mahout-user mailing list archives

From Jake Mannix <>
Subject Re: Vectors (from raw text) with more than one word values
Date Tue, 06 Aug 2013 12:32:41 GMT
Indeed, and our seq2sparse utility enables this directly.  If you ask for
cmdline help from seq2sparse, you'll see a bunch of options you maybe don't
know about:

$ ./bin/mahout seq2sparse -h
Error: Could not find or load main class classpath
Running on hadoop, using /usr/local/Cellar/hadoop/0.20.1/libexec/bin/hadoop

 [--minSupport <minSupport> --analyzerName <analyzerName> --chunkSize
<chunkSize> --output <output> --input <input> --minDF <minDF> --maxDFSigma
<maxDFSigma> --maxDFPercent <maxDFPercent> --weight <weight> --norm <norm>
--minLLR <minLLR> --numReducers <numReducers> --maxNGramSize <ngramSize>
--overwrite --help --sequentialAccessVector --namedVector --logNormalize]

  --minSupport (-s) minSupport        (Optional) Minimum Support. Default
                                      Value: 2
  --analyzerName (-a) analyzerName    The class name of the analyzer
  --chunkSize (-chunk) chunkSize      The chunkSize in MegaBytes. 100-10000
                                      MB
  --output (-o) output                The directory pathname for output.
  --input (-i) input                  Path to job input directory.
  --minDF (-md) minDF                 The minimum document frequency. Default
                                      is 1
  --maxDFSigma (-xs) maxDFSigma       What portion of the tf (tf-idf) vectors
                                      to be used, expressed in times the
                                      standard deviation (sigma) of the
                                      document frequencies of these vectors.
                                      Can be used to remove really high
                                      frequency terms. Expressed as a double
                                      value. Good value to be specified is
                                      3.0. In case the value is less than 0
                                      no vectors will be filtered out.
                                      Default is -1.0.  Overrides
                                      maxDFPercent
  --maxDFPercent (-x) maxDFPercent    The max percentage of docs for the DF.
                                      Can be used to remove really high
                                      frequency terms. Expressed as an
                                      integer between 0 and 100. Default is
                                      99.  If maxDFSigma is also set, it will
                                      override this value.
  --weight (-wt) weight               The kind of weight to use. Currently TF
                                      or TFIDF
  --norm (-n) norm                    The norm to use, expressed as either a
                                      float or "INF" if you want to use the
                                      Infinite norm.  Must be greater or
                                      equal to 0.  The default is not to
                                      normalize
  --minLLR (-ml) minLLR               (Optional)The minimum Log Likelihood
                                      Ratio(Float)  Default is 1.0
  --numReducers (-nr) numReducers     (Optional) Number of reduce tasks.
                                      Default Value: 1
  --maxNGramSize (-ng) ngramSize      (Optional) The maximum size of ngrams
                                      to create (2 = bigrams, 3 = trigrams,
                                      etc) Default Value:1
  --overwrite (-ow)                   If set, overwrite the output directory
  --help (-h)                         Print out help
  --sequentialAccessVector (-seq)     (Optional) Whether output vectors
                                      should be SequentialAccessVectors. If
                                      set true else false
  --namedVector (-nv)                 (Optional) Whether output vectors
                                      should be NamedVectors. If set true
                                      else false
  --logNormalize (-lnorm)             (Optional) Whether output vectors
                                      should be logNormalize. If set true
                                      else false
13/08/06 04:44:45 INFO driver.MahoutDriver: Program took 158 ms (Minutes:


In particular, "--maxNGramSize 3" says you don't want just raw tokens, but
bigrams like "Arnold Schwarzenegger" and trigrams like "New York Yankees".
 To decide *which* ones to use (because there are *way* too many 2- and
3-grams if you take all of them), the simple technique we have in this
utility is to a) filter by document frequency, either on the high end,
getting rid of features [either tokens/unigrams or ngrams for n > 1] which
occur too frequently, by setting --maxDFPercent 95 [this drops features
which appear in more than 95% of the documents] or --maxDFSigma 3.0 [this
drops all features whose doc frequency is more than 3 sigma above the
mean], or on the low end, getting rid of features which occur too rarely,
with --minDF 2 [this drops features which appear in fewer than 2
documents]; and b) filter by log likelihood ratio: --minLLR 10.0 requires
the LLR of an ngram to be at least 10.0.  For a more detailed explanation
of ngrams and LLR, Ted's classic blog post may be helpful.
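
To make that concrete, here is a sketch of a run that wires these flags
together (the input/output paths are placeholders; the flags themselves
are all in the help output above):

# hypothetical paths; -i is typically the SequenceFile output of a prior
# seqdirectory step
$ ./bin/mahout seq2sparse \
    -i /path/to/text-seqfiles \
    -o /path/to/vectors \
    --maxNGramSize 3 \
    --minLLR 10.0 \
    --maxDFPercent 95 \
    --minDF 2 \
    --weight TFIDF \
    --namedVector -ow

The -ow just lets you re-run into the same output directory while you tune
the thresholds, and --namedVector keeps the document keys on the vectors.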

  The TL;DR of it is that for practical purposes, you really want to try a
simple run with say "--minLLR 1.0", see how many ngrams are left, how good
they look, and what their LLR is, and then bump the value up to something
which gets rid of more of the crappy ones - maybe it's 10.0, maybe it's
15.0, maybe it's 25.0; it depends on your data, and on how many features
you want to end up with at the end of the day.
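
One way to do that comparison (assuming the default seq2sparse output
layout, where the term dictionary lands in something like
<output>/dictionary.file-0) is to dump the dictionary with seqdumper:

# skim the ngrams that survived the current thresholds
$ ./bin/mahout seqdumper -i /path/to/vectors/dictionary.file-0 | less

# rough count of surviving features, for comparing --minLLR settings
$ ./bin/mahout seqdumper -i /path/to/vectors/dictionary.file-0 | wc -l

Bump --minLLR, re-run, and compare the counts (and the ngrams themselves)
until the junk is mostly gone.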

On Tue, Aug 6, 2013 at 3:05 AM, Marco <> wrote:

> Is it possible to have vector components from raw text samples with more
> than one word?
> Example:
> Key: California: Value: "Arnold Schwarzenegger" "San Andreas Fault"
> (I've put quotation marks just to show how I'd like to group vector's
> values)


