labs-commits mailing list archives

From tomm...@apache.org
Subject svn commit: r1707515 - in /labs/yay/trunk/core/src/test: java/org/apache/yay/core/Word2VecTest.java resources/word2vec/sentences.txt
Date Thu, 08 Oct 2015 12:46:25 GMT
Author: tommaso
Date: Thu Oct  8 12:46:25 2015
New Revision: 1707515

URL: http://svn.apache.org/viewvc?rev=1707515&view=rev
Log:
adjusted random weights generation, added a few more sentences

Modified:
    labs/yay/trunk/core/src/test/java/org/apache/yay/core/Word2VecTest.java
    labs/yay/trunk/core/src/test/resources/word2vec/sentences.txt

Modified: labs/yay/trunk/core/src/test/java/org/apache/yay/core/Word2VecTest.java
URL: http://svn.apache.org/viewvc/labs/yay/trunk/core/src/test/java/org/apache/yay/core/Word2VecTest.java?rev=1707515&r1=1707514&r2=1707515&view=diff
==============================================================================
--- labs/yay/trunk/core/src/test/java/org/apache/yay/core/Word2VecTest.java (original)
+++ labs/yay/trunk/core/src/test/java/org/apache/yay/core/Word2VecTest.java Thu Oct  8 12:46:25 2015
@@ -65,19 +65,18 @@ public class Word2VecTest {
     Collection<String> fragments = getFragments(sentences, 4);
     assertFalse(fragments.isEmpty());
 
-    // TODO : make it possible to define the no. of hidden units
-    //    int n = new Random().nextInt(20);
     TrainingSet<Double, Double> trainingSet = createTrainingSet(vocabulary, fragments);
 
     TrainingExample<Double, Double> next = trainingSet.iterator().next();
     int inputSize = next.getFeatures().size() ;
     int outputSize = next.getOutput().length;
-    RealMatrix[] randomWeights = createRandomWeights(inputSize, inputSize, outputSize);
+    int n = new Random().nextInt(20);
+    RealMatrix[] randomWeights = createRandomWeights(inputSize, n, outputSize);
 
     FeedForwardStrategy predictionStrategy = new FeedForwardStrategy(new IdentityActivationFunction<Double>());
-    BackPropagationLearningStrategy learningStrategy = new BackPropagationLearningStrategy(BackPropagationLearningStrategy.
-            DEFAULT_ALPHA, -1, BackPropagationLearningStrategy.DEFAULT_THRESHOLD, predictionStrategy, new LMSCostFunction(),
-            10);
+    BackPropagationLearningStrategy learningStrategy = new BackPropagationLearningStrategy(0.0005d, -1,
+            BackPropagationLearningStrategy.DEFAULT_THRESHOLD, predictionStrategy, new LMSCostFunction(),
+            30);
     NeuralNetwork neuralNetwork = NeuralNetworkFactory.create(randomWeights, learningStrategy, predictionStrategy);
 
     neuralNetwork.learn(trainingSet);
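
The hunk above lowers the learning rate from BackPropagationLearningStrategy.DEFAULT_ALPHA to 0.0005 and raises the final constructor argument from 10 to 30 (plausibly an iteration cap). As a rough, library-independent illustration of the trade-off involved, smaller steps generally need more iterations to make the same progress; this toy gradient descent (hypothetical sketch, not the Yay API) minimises f(w) = (w - 3)^2:

```java
// Illustration only: smaller learning rate alpha -> slower per-iteration progress,
// which is why lowering alpha usually goes together with raising the iteration cap.
public class StepSizeSketch {

    // Plain gradient descent on f(w) = (w - 3)^2, starting from w = 0.
    static double descend(double alpha, int iterations) {
        double w = 0.0;
        for (int i = 0; i < iterations; i++) {
            double grad = 2.0 * (w - 3.0); // f'(w)
            w -= alpha * grad;
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println(descend(0.1, 30));    // close to the minimum at 3
        System.out.println(descend(0.0005, 30)); // barely moved: tiny steps need far more iterations
    }
}
```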
@@ -272,10 +271,10 @@ public class Word2VecTest {
 
     RealMatrix[] initialWeights = new RealMatrix[weightsCount];
     for (int i = 0; i < weightsCount; i++) {
-      int rows = inputSize;
+      int rows = hiddenSize;
       int cols;
       if (i == 0) {
-        cols = hiddenSize;
+        cols = inputSize;
       } else {
         cols = initialWeights[i - 1].getRowDimension();
         if (i == weightsCount - 1) {
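
The second hunk swaps the row/column roles so that each weight matrix has one row per unit in the layer it feeds and one column per unit in the layer it reads, matching the new random hidden size n from the first hunk. A minimal sketch of that shape convention, using plain double[][] arrays in place of the test's commons-math RealMatrix (helper names here are hypothetical):

```java
import java.util.Random;

// Sketch of the corrected dimension convention for a single-hidden-layer network:
// weights[0] maps input -> hidden (hiddenSize x inputSize),
// weights[1] maps hidden -> output (outputSize x hiddenSize).
public class WeightShapesSketch {

    static double[][] randomMatrix(int rows, int cols, Random rng) {
        double[][] m = new double[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                m[r][c] = rng.nextDouble() - 0.5; // small values centred on zero
        return m;
    }

    // Mirrors the shape of createRandomWeights(inputSize, n, outputSize) in the test.
    static double[][][] createRandomWeights(int inputSize, int hiddenSize, int outputSize) {
        Random rng = new Random();
        return new double[][][]{
                randomMatrix(hiddenSize, inputSize, rng),  // hiddenSize x inputSize
                randomMatrix(outputSize, hiddenSize, rng)  // outputSize x hiddenSize
        };
    }

    public static void main(String[] args) {
        // 1 + nextInt(20) keeps the hidden layer non-empty;
        // nextInt(20) alone, as in the committed test, may return 0.
        int n = 1 + new Random().nextInt(20);
        double[][][] w = createRandomWeights(100, n, 50);
        System.out.println(w[0].length + "x" + w[0][0].length); // n x 100
        System.out.println(w[1].length + "x" + w[1][0].length); // 50 x n
    }
}
```

One consequence worth noting: since `new Random().nextInt(20)` in the committed test ranges over 0..19, the randomly chosen hidden-layer size can be zero, which the guard above avoids.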

Modified: labs/yay/trunk/core/src/test/resources/word2vec/sentences.txt
URL: http://svn.apache.org/viewvc/labs/yay/trunk/core/src/test/resources/word2vec/sentences.txt?rev=1707515&r1=1707514&r2=1707515&view=diff
==============================================================================
--- labs/yay/trunk/core/src/test/resources/word2vec/sentences.txt (original)
+++ labs/yay/trunk/core/src/test/resources/word2vec/sentences.txt Thu Oct  8 12:46:25 2015
@@ -12,4 +12,15 @@ By subsampling of the frequent words we
 We also describe a simple alternative to the hierarchical softmax called negative sampling
 An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases
 For example the meanings of “Canada” and “Air” cannot be easily combined to obtain “Air Canada”
-Motivated by this example we present a simple method for finding phrases in text and show that learning good vector representations for millions of phrases is possible
\ No newline at end of file
+Motivated by this example we present a simple method for finding phrases in text and show that learning good vector representations for millions of phrases is possible
+The similarity metrics used for nearest neighbor evaluations produce a single scalar that quantifies the relatedness of two words
+This simplicity can be problematic since two given words almost always exhibit more intricate relationships than can be captured by a single number
+For example man may be regarded as similar to woman in that both words describe human beings on the other hand the two words are often considered opposites since they highlight a primary axis along which humans differ from one another
+In order to capture in a quantitative way the nuance necessary to distinguish man from woman it is necessary for a model to associate more than a single number to the word pair
+A natural and simple candidate for an enlarged set of discriminative numbers is the vector difference between the two word vectors
+GloVe is designed in order that such vector differences capture as much as possible the meaning specified by the juxtaposition of two words
+Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems
+However most of these models are built with only local context and one representation per word
+This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings
+We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context and 2) accounts for homonymy and polysemy by learning multiple embeddings per word
+We introduce a new dataset with human judgments on pairs of words in sentential context and evaluate our model on it showing that our model outperforms competitive baselines and other neural language models
\ No newline at end of file



