From: rawkintrevo@apache.org
To: commits@mahout.apache.org
Date: Fri, 05 May 2017 01:41:26 -0000
Subject: [22/62] [abbrv] mahout git commit: WEBSITE Ported MR-Clustering Tutorials and Algos

WEBSITE Ported MR-Clustering Tutorials and Algos

Project: http://git-wip-us.apache.org/repos/asf/mahout/repo
Commit: http://git-wip-us.apache.org/repos/asf/mahout/commit/516e3fb9
Tree: http://git-wip-us.apache.org/repos/asf/mahout/tree/516e3fb9
Diff: http://git-wip-us.apache.org/repos/asf/mahout/diff/516e3fb9

Branch: refs/heads/master
Commit: 516e3fb9ab340a2e641770e32d6511c8c7b365b6
Parents: b582dc5
Author: rawkintrevo
Authored: Mon May 1 16:29:39 2017 -0500
Committer: rawkintrevo
Committed: Mon May 1 16:29:39 2017 -0500

----------------------------------------------------------------------
 website/docs/_includes/mr_algo_navbar.html | 56 ++-
 website/docs/_includes/mr_tutorial_navbar.html | 29 +-
 .../map-reduce/clustering/canopy-clustering.md | 188 ++++
 .../map-reduce/clustering/cluster-dumper.md | 106 +++++
 .../clustering/expectation-maximization.md | 62 +++
 .../map-reduce/clustering/fuzzy-k-means.md | 184 ++++
 .../clustering/hierarchical-clustering.md | 15 +
 .../map-reduce/clustering/k-means-clustering.md | 182 ++++
 .../clustering/latent-dirichlet-allocation.md | 155 +++
 .../clustering/llr---log-likelihood-ratio.md | 46 ++
 .../clustering/spectral-clustering.md | 84 ++++
 .../map-reduce/clustering/streaming-k-means.md | 174 ++++
 .../map-reduce/clustering/20newsgroups.md | 11 +
 .../map-reduce/clustering/canopy-commandline.md | 70 +++
 .../clustering-of-synthetic-control-data.md | 53 +++
 .../clustering/clustering-seinfeld-episodes.md | 11 +
 .../map-reduce/clustering/clusteringyourdata.md | 126 ++++
 .../clustering/fuzzy-k-means-commandline.md | 97 ++++
 .../clustering/k-means-commandline.md | 94 ++++
 .../map-reduce/clustering/lda-commandline.md | 83 ++++
 .../map-reduce/clustering/viewing-result.md | 15 +
 .../map-reduce/clustering/viewing-results.md | 49 +++
.../clustering/visualizing-sample-clusters.md | 50 +++ .../map-reduce/misc/mr---map-reduce.md | 19 + .../misc/parallel-frequent-pattern-mining.md | 185 ++++++++ .../map-reduce/misc/perceptron-and-winnow.md | 41 ++ .../docs/tutorials/map-reduce/misc/testing.md | 46 ++ .../misc/using-mahout-with-python-via-jpype.md | 222 ++++++++++ .../map-reduce/recommender/intro-als-hadoop.md | 98 +++++ .../recommender/intro-cooccurrence-spark.md | 437 +++++++++++++++++++ .../recommender/intro-itembased-hadoop.md | 54 +++ .../recommender/matrix-factorization.md | 187 ++++++++ .../map-reduce/recommender/quickstart.md | 32 ++ .../recommender/recommender-documentation.md | 277 ++++++++++++ .../recommender/recommender-first-timer-faq.md | 54 +++ .../recommender/userbased-5-minutes.md | 133 ++++++ 36 files changed, 3697 insertions(+), 28 deletions(-) ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/_includes/mr_algo_navbar.html ---------------------------------------------------------------------- diff --git a/website/docs/_includes/mr_algo_navbar.html b/website/docs/_includes/mr_algo_navbar.html index 87dbdde..48c0502 100644 --- a/website/docs/_includes/mr_algo_navbar.html +++ b/website/docs/_includes/mr_algo_navbar.html @@ -1,28 +1,42 @@ + http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/_includes/mr_tutorial_navbar.html ---------------------------------------------------------------------- diff --git a/website/docs/_includes/mr_tutorial_navbar.html b/website/docs/_includes/mr_tutorial_navbar.html index 844c701..b9f0140 100644 --- a/website/docs/_includes/mr_tutorial_navbar.html +++ b/website/docs/_includes/mr_tutorial_navbar.html @@ -1,14 +1,29 @@ \ No newline at end of file http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/canopy-clustering.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/canopy-clustering.md b/website/docs/algorithms/map-reduce/clustering/canopy-clustering.md new file mode 100644 index 0000000..6571038 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/canopy-clustering.md @@ -0,0 +1,188 @@ +--- +layout: mr_algorithm +title: Canopy Clustering +theme: + name: retro-mahout +--- + + +# Canopy Clustering + +[Canopy Clustering](http://www.kamalnigam.com/papers/canopy-kdd00.pdf) + is a very simple, fast and surprisingly accurate method for grouping +objects into clusters. All objects are represented as a point in a +multidimensional feature space. The algorithm uses a fast approximate +distance metric and two distance thresholds T1 > T2 for processing. The +basic algorithm is to begin with a set of points and remove one at random. +Create a Canopy containing this point and iterate through the remainder of +the point set. At each point, if its distance from the first point is < T1, +then add the point to the cluster. If, in addition, the distance is < T2, +then remove the point from the set. This way points that are very close to +the original will avoid all further processing. The algorithm loops until +the initial set is empty, accumulating a set of Canopies, each containing +one or more points. A given point may occur in more than one Canopy. + +Canopy Clustering is often used as an initial step in more rigorous +clustering techniques, such as [K-Means Clustering](k-means-clustering.html) +. 
By starting with an initial clustering the number of more expensive +distance measurements can be significantly reduced by ignoring points +outside of the initial canopies. + +**WARNING**: Canopy is deprecated in the latest release and will be removed once streaming k-means becomes stable enough. + + +## Strategy for parallelization + +Looking at the sample Hadoop implementation in [http://code.google.com/p/canopy-clustering/](http://code.google.com/p/canopy-clustering/) + the processing is done in 3 M/R steps: +1. The data is massaged into suitable input format +1. Each mapper performs canopy clustering on the points in its input set and +outputs its canopies' centers +1. The reducer clusters the canopy centers to produce the final canopy +centers +1. The points are then clustered into these final canopies + +Some ideas can be found in [Cluster computing and MapReduce](https://www.youtube.com/watch?v=yjPBkvYh-ss&list=PLEFAB97242917704A) + lecture video series \[by Google(r)\]; Canopy Clustering is discussed in [lecture #4](https://www.youtube.com/watch?v=1ZDybXl212Q) +. Finally here is the [Wikipedia page](http://en.wikipedia.org/wiki/Canopy_clustering_algorithm) +. + + +## Design of implementation + +The implementation accepts as input Hadoop SequenceFiles containing +multidimensional points (VectorWritable). Points may be expressed either as +dense or sparse Vectors and processing is done in two phases: Canopy +generation and, optionally, Clustering. + + +### Canopy generation phase + +During the map step, each mapper processes a subset of the total points and +applies the chosen distance measure and thresholds to generate canopies. In +the mapper, each point which is found to be within an existing canopy will +be added to an internal list of Canopies. After observing all its input +vectors, the mapper updates all of its Canopies and normalizes their totals +to produce canopy centroids which are output, using a constant key +("centroid") to a single reducer. The reducer receives all of the initial +centroids and again applies the canopy measure and thresholds to produce a +final set of canopy centroids which is output (i.e. clustering the cluster +centroids). The reducer output format is: SequenceFile(Text, Canopy) with +the _key_ encoding the canopy identifier. + + +### Clustering phase + +During the clustering phase, each mapper reads the Canopies produced by the +first phase. Since all mappers have the same canopy definitions, their +outputs will be combined during the shuffle so that each reducer (many are +allowed here) will see all of the points assigned to one or more canopies. +The output format will then be: SequenceFile(IntWritable, +WeightedVectorWritable) with the _key_ encoding the canopyId. The +WeightedVectorWritable has two fields: a double weight and a VectorWritable +vector. Together they encode the probability that each vector is a member +of the given canopy. + + +## Running Canopy Clustering + +The canopy clustering algorithm may be run using a command-line invocation +on CanopyDriver.main or by making a Java call to CanopyDriver.run(...). +Both require several arguments: + +Invocation using the command line takes the form: + + + bin/mahout canopy \ + -i \ + -o \ + -dm \ + -t1 \ + -t2 \ + -t3 \ + -t4 \ + -cf \ + -ow + -cl + -xm + + +Invocation using Java involves supplying the following arguments: + +1. input: a file path string to a directory containing the input data set a +SequenceFile(WritableComparable, VectorWritable). The sequence file _key_ +is not used. +1. 
output: a file path string to an empty directory which is used for all +output from the algorithm. +1. measure: the fully-qualified class name of an instance of DistanceMeasure +which will be used for the clustering. +1. t1: the T1 distance threshold used for clustering. +1. t2: the T2 distance threshold used for clustering. +1. t3: the optional T1 distance threshold used by the reducer for +clustering. If not specified, T1 is used by the reducer. +1. t4: the optional T2 distance threshold used by the reducer for +clustering. If not specified, T2 is used by the reducer. +1. clusterFilter: the minimum size for canopies to be output by the +algorithm. Affects both sequential and mapreduce execution modes, and +mapper and reducer outputs. +1. runClustering: a boolean indicating, if true, that the clustering step is +to be executed after clusters have been determined. +1. runSequential: a boolean indicating, if true, that the computation is to +be run in memory using the reference Canopy implementation. Note: that the +sequential implementation performs a single pass through the input vectors +whereas the MapReduce implementation performs two passes (once in the +mapper and again in the reducer). The MapReduce implementation will +typically produce less clusters than the sequential implementation as a +result. + +After running the algorithm, the output directory will contain: +1. clusters-0: a directory containing SequenceFiles(Text, Canopy) produced +by the algorithm. The Text _key_ contains the cluster identifier of the +Canopy. +1. clusteredPoints: (if runClustering enabled) a directory containing +SequenceFile(IntWritable, WeightedVectorWritable). The IntWritable _key_ is +the canopyId. The WeightedVectorWritable _value_ is a bean containing a +double _weight_ and a VectorWritable _vector_ where the weight indicates +the probability that the vector is a member of the canopy. For canopy +clustering, the weights are computed as 1/(1+distance) where the distance +is between the cluster center and the vector using the chosen +DistanceMeasure. + + +# Examples + +The following images illustrate Canopy clustering applied to a set of +randomly-generated 2-d data points. The points are generated using a normal +distribution centered at a mean location and with a constant standard +deviation. See the README file in the [/examples/src/main/java/org/apache/mahout/clustering/display/README.txt](https://github.com/apache/mahout/blob/master/examples/src/main/java/org/apache/mahout/clustering/display/README.txt) + for details on running similar examples. + +The points are generated as follows: + +* 500 samples m=\[1.0, 1.0\](1.0,-1.0\.html) + sd=3.0 +* 300 samples m=\[1.0, 0.0\](1.0,-0.0\.html) + sd=0.5 +* 300 samples m=\[0.0, 2.0\](0.0,-2.0\.html) + sd=0.1 + +In the first image, the points are plotted and the 3-sigma boundaries of +their generator are superimposed. + +![sample data](../../images/SampleData.png) + +In the second image, the resulting canopies are shown superimposed upon the +sample data. Each canopy is represented by two circles, with radius T1 and +radius T2. + +![canopy](../../images/Canopy.png) + +The third image uses the same values of T1 and T2 but only superimposes +canopies covering more than 10% of the population. This is a bit better +representation of the data but it still has lots of room for improvement. +The advantage of Canopy clustering is that it is single-pass and fast +enough to iterate runs using different T1, T2 parameters and display +thresholds. 
+ +![canopy](../../images/Canopy10.png) + http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/cluster-dumper.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/cluster-dumper.md b/website/docs/algorithms/map-reduce/clustering/cluster-dumper.md new file mode 100644 index 0000000..e85ef99 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/cluster-dumper.md @@ -0,0 +1,106 @@ +--- +layout: mr_algorithm +title: Cluster Dumper +theme: + name: retro-mahout +--- + + +## Cluster Dumper - Introduction + +Clustering tasks in Mahout will output data in the format of a SequenceFile +(Text, Cluster) and the Text is a cluster identifier string. To analyze +this output we need to convert the sequence files to a human readable +format and this is achieved using the clusterdump utility. + + +## Steps for analyzing cluster output using clusterdump utility + +After you've executed a clustering tasks (either examples or real-world), +you can run clusterdumper in 2 modes: + + +1. Hadoop Environment +1. Standalone Java Program + + + +### Hadoop Environment + +If you have setup your HADOOP_HOME environment variable, you can use the +command line utility `mahout` to execute the ClusterDumper on Hadoop. In +this case we wont need to get the output clusters to our local machines. +The utility will read the output clusters present in HDFS and output the +human-readable cluster values into our local file system. Say you've just +executed the [synthetic control example ](clustering-of-synthetic-control-data.html) + and want to analyze the output, you can execute the `mahout clusterdumper` utility from the command line. + +#### CLI options: + --help Print out help + --input (-i) input The directory containing Sequence + Files for the Clusters + --output (-o) output The output file. If not specified, + dumps to the console. + --outputFormat (-of) outputFormat The optional output format to write + the results as. Options: TEXT, CSV, or GRAPH_ML + --substring (-b) substring The number of chars of the + asFormatString() to print + --pointsDir (-p) pointsDir The directory containing points + sequence files mapping input vectors + to their cluster. If specified, + then the program will output the + points associated with a cluster + --dictionary (-d) dictionary The dictionary file. + --dictionaryType (-dt) dictionaryType The dictionary file type + (text|sequencefile) + --distanceMeasure (-dm) distanceMeasure The classname of the DistanceMeasure. + Default is SquaredEuclidean. + --numWords (-n) numWords The number of top terms to print + --tempDir tempDir Intermediate output directory + --startPhase startPhase First phase to run + --endPhase endPhase Last phase to run + --evaluate (-e) Run ClusterEvaluator and CDbwEvaluator over the + input. The output will be appended to the rest of + the output at the end. + +### Standalone Java Program + +Run the clusterdump utility as follows as a standalone Java Program through Eclipse. 
+ To execute ClusterDumper.java, + +* Under mahout-utils, Right-Click on ClusterDumper.java +* Choose Run-As, Run Configurations +* On the left menu, click on Java Application +* On the top-bar click on "New Launch Configuration" +* A new launch should be automatically created with project as + + "mahout-utils" and Main Class as "org.apache.mahout.utils.clustering.ClusterDumper" + +In the arguments tab, specify the below arguments + + + --seqFileDir /examples/output/clusters-10 + --pointsDir /examples/output/clusteredPoints + --output /examples/output/clusteranalyze.txt + replace with the actual path of your $MAHOUT_HOME + +* Hit run to execute the ClusterDumper using Eclipse. Setting breakpoints etc should just work fine. + +Reading the output file + +This will output the clusters into a file called clusteranalyze.txt inside $MAHOUT_HOME/examples/output +Sample data will look like + +CL-0 { n=116 c=[29.922, 30.407, 30.373, 30.094, 29.886, 29.937, 29.751, 30.054, 30.039, 30.126, 29.764, 29.835, 30.503, 29.876, 29.990, 29.605, 29.379, 30.120, 29.882, 30.161, 29.825, 30.074, 30.001, 30.421, 29.867, 29.736, 29.760, 30.192, 30.134, 30.082, 29.962, 29.512, 29.736, 29.594, 29.493, 29.761, 29.183, 29.517, 29.273, 29.161, 29.215, 29.731, 29.154, 29.113, 29.348, 28.981, 29.543, 29.192, 29.479, 29.406, 29.715, 29.344, 29.628, 29.074, 29.347, 29.812, 29.058, 29.177, 29.063, 29.607](29.922,-30.407,-30.373,-30.094,-29.886,-29.937,-29.751,-30.054,-30.039,-30.126,-29.764,-29.835,-30.503,-29.876,-29.990,-29.605,-29.379,-30.120,-29.882,-30.161,-29.825,-30.074,-30.001,-30.421,-29.867,-29.736,-29.760,-30.192,-30.134,-30.082,-29.962,-29.512,-29.736,-29.594,-29.493,-29.761,-29.183,-29.517,-29.273,-29.161,-29.215,-29.731,-29.154,-29.113,-29.348,-28.981,-29.543,-29.192,-29.479,-29.406,-29.715,-29.344,-29.628,-29.074,-29.347,-29.812,-29.058,-29.177,-29.063,-29.607.html) + r=[3.463, 3.351, 3.452, 3.438, 3.371, 3.569, 3.253, 3.531, 3.439, 3.472, +3.402, 3.459, 3.320, 3.260, 3.430, 3.452, 3.320, 3.499, 3.302, 3.511, +3.520, 3.447, 3.516, 3.485, 3.345, 3.178, 3.492, 3.434, 3.619, 3.483, +3.651, 3.833, 3.812, 3.433, 4.133, 3.855, 4.123, 3.999, 4.467, 4.731, +4.539, 4.956, 4.644, 4.382, 4.277, 4.918, 4.784, 4.582, 4.915, 4.607, +4.672, 4.577, 5.035, 5.241, 4.731, 4.688, 4.685, 4.657, 4.912, 4.300] } + +and on... + +where CL-0 is the Cluster 0 and n=116 refers to the number of points observed by this cluster and c = \[29.922 ...\] + refers to the center of Cluster as a vector and r = \[3.463 ..\] refers to +the radius of the cluster as a vector. \ No newline at end of file http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/expectation-maximization.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/expectation-maximization.md b/website/docs/algorithms/map-reduce/clustering/expectation-maximization.md new file mode 100644 index 0000000..b02401f --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/expectation-maximization.md @@ -0,0 +1,62 @@ +--- +layout: mr_algorithm +title: Expectation Maximization +theme: + name: retro-mahout +--- + +# Expectation Maximization + +The principle of EM can be applied to several learning settings, but is +most commonly associated with clustering. The main principle of the +algorithm is comparable to k-Means. Yet in contrast to hard cluster +assignments, each object is given some probability to belong to a cluster. 
+Accordingly cluster centers are recomputed based on the average of all +objects weighted by their probability of belonging to the cluster at hand. + + +## Canopy-modified EM + +One can also use the canopies idea to speed up prototypebased clustering +methods like K-means and Expectation-Maximization (EM). In general, neither +K-means nor EMspecify how many clusters to use. The canopies technique does +not help this choice. + +Prototypes (our estimates of the cluster centroids) are associated with the +canopies that contain them, and the prototypes are only influenced by data +that are inside their associated canopies. After creating the canopies, we +decide how many prototypes will be created for each canopy. This could be +done, for example, using the number of data points in a canopy and AIC or +BIC where points that occur in more than one canopy are counted +fractionally. Then we place prototypesinto each canopy. This initial +placement can be random, as long as it is within the canopy in question, as +determined by the inexpensive distance metric. + +Then, instead of calculating the distance from each prototype to every +point (as is traditional, a O(nk) operation), theE-step instead calculates +the distance from each prototype to a much smaller number of points. For +each prototype, we find the canopies that contain it (using the cheap +distance metric), and only calculate distances (using the expensive +distance metric) from that prototype to points within those canopies. + +Note that by this procedure prototypes may move across canopy boundaries +when canopies overlap. Prototypes may move to cover the data in the +overlapping region, and then move entirely into another canopy in order to +cover data there. + +The canopy-modified EM algorithm behaves very similarly to traditional EM, +with the slight difference that points outside the canopy have no influence +on points in the canopy, rather than a minute influence. If the canopy +property holds, and points in the same cluster fall in the same canopy, +then the canopy-modified EM will almost always converge to the same maximum +in likelihood as the traditional EM. In fact, the difference in each +iterative step (apart from the enormous computational savings of computing +fewer terms) will be negligible since points outside the canopy will have +exponentially small influence. + + +## Strategy for Parallelization + + +## Map/Reduce Implementation + http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/fuzzy-k-means.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/fuzzy-k-means.md b/website/docs/algorithms/map-reduce/clustering/fuzzy-k-means.md new file mode 100644 index 0000000..6be9166 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/fuzzy-k-means.md @@ -0,0 +1,184 @@ +--- +layout: mr_algorithm +title: Fuzzy K-Means +theme: + name: retro-mahout +--- + +Fuzzy K-Means (also called Fuzzy C-Means) is an extension of [K-Means](http://mahout.apache.org/users/clustering/k-means-clustering.html) +, the popular simple clustering technique. While K-Means discovers hard +clusters (a point belong to only one cluster), Fuzzy K-Means is a more +statistically formalized method and discovers soft clusters where a +particular point can belong to more than one cluster with certain +probability. 
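For reference, the soft membership described above is usually computed with the standard fuzzy c-means formula (general background, not text from the original page): the degree to which point `\(x_j\)` belongs to the cluster with center `\(c_i\)` is

`\(u_{ij} = 1 \Big/ \sum_{l=1}^{k} \left( \frac{\lVert x_j - c_i \rVert}{\lVert x_j - c_l \rVert} \right)^{2/(m-1)}\)`

where `\(m > 1\)` is the fuzziness parameter (the `m` argument described below). New cluster centers are then the membership-weighted averages of all points. As `\(m\)` approaches 1 the memberships approach the hard assignments of k-means, while larger `\(m\)` spreads membership more evenly across clusters.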
+ + +#### Algorithm + +Like K-Means, Fuzzy K-Means works on those objects which can be represented +in n-dimensional vector space and a distance measure is defined. +The algorithm is similar to k-means. + +* Initialize k clusters +* Until converged + * Compute the probability of a point belong to a cluster for every pair + * Recompute the cluster centers using above probability membership values of points to clusters + + +#### Design Implementation + +The design is similar to K-Means present in Mahout. It accepts an input +file containing vector points. User can either provide the cluster centers +as input or can allow canopy algorithm to run and create initial clusters. + +Similar to K-Means, the program doesn't modify the input directories. And +for every iteration, the cluster output is stored in a directory cluster-N. +The code has set number of reduce tasks equal to number of map tasks. So, +those many part-0 + + +Files are created in clusterN directory. The code uses +driver/mapper/combiner/reducer as follows: + +FuzzyKMeansDriver - This is similar to  KMeansDriver. It iterates over +input points and cluster points for specified number of iterations or until +it is converged.During every iteration i, a new cluster-i directory is +created which contains the modified cluster centers obtained during +FuzzyKMeans iteration. This will be feeded as input clusters in the next +iteration.  Once Fuzzy KMeans is run for specified number of +iterations or until it is converged, a map task is run to output "the point +and the cluster membership to each cluster" pair as final output to a +directory named "points". + +FuzzyKMeansMapper - reads the input cluster during its configure() method, +then  computes cluster membership probability of a point to each +cluster.Cluster membership is inversely propotional to the distance. +Distance is computed using  user supplied distance measure. Output key +is encoded clusterId. Output values are ClusterObservations containing +observation statistics. + +FuzzyKMeansCombiner - receives all key:value pairs from the mapper and +produces partial sums of the cluster membership probability times input +vectors for each cluster. Output key is: encoded cluster identifier. Output +values are ClusterObservations containing observation statistics. + +FuzzyKMeansReducer - Multiple reducers receives certain keys and all values +associated with those keys. The reducer sums the values to produce a new +centroid for the cluster which is output. Output key is: encoded cluster +identifier (e.g. "C14". Output value is: formatted cluster identifier (e.g. +"C14"). The reducer encodes unconverged clusters with a 'Cn' cluster Id and +converged clusters with 'Vn' clusterId. + + +## Running Fuzzy k-Means Clustering + +The Fuzzy k-Means clustering algorithm may be run using a command-line +invocation on FuzzyKMeansDriver.main or by making a Java call to +FuzzyKMeansDriver.run(). + +Invocation using the command line takes the form: + + + bin/mahout fkmeans \ + -i \ + -c \ + -o \ + -dm \ + -m 1> \ + -x \ + -k \ + -cd \ + -ow + -cl + -e + -t + -xm + + +*Note:* if the -k argument is supplied, any clusters in the -c directory +will be overwritten and -k random points will be sampled from the input +vectors to become the initial cluster centers. + +Invocation using Java involves supplying the following arguments: + +1. input: a file path string to a directory containing the input data set a +SequenceFile(WritableComparable, VectorWritable). The sequence file _key_ +is not used. +1. 
clustersIn: a file path string to a directory containing the initial +clusters, a SequenceFile(key, SoftCluster | Cluster | Canopy). Fuzzy +k-Means SoftClusters, k-Means Clusters and Canopy Canopies may be used for +the initial clusters. +1. output: a file path string to an empty directory which is used for all +output from the algorithm. +1. measure: the fully-qualified class name of an instance of DistanceMeasure +which will be used for the clustering. +1. convergence: a double value used to determine if the algorithm has +converged (clusters have not moved more than the value in the last +iteration) +1. max-iterations: the maximum number of iterations to run, independent of +the convergence specified +1. m: the "fuzzyness" argument, a double > 1. For m equal to 2, this is +equivalent to normalising the coefficient linearly to make their sum 1. +When m is close to 1, then the cluster center closest to the point is given +much more weight than the others, and the algorithm is similar to k-means. +1. runClustering: a boolean indicating, if true, that the clustering step is +to be executed after clusters have been determined. +1. emitMostLikely: a boolean indicating, if true, that the clustering step +should only emit the most likely cluster for each clustered point. +1. threshold: a double indicating, if emitMostLikely is false, the cluster +probability threshold used for emitting multiple clusters for each point. A +value of 0 will emit all clusters with their associated probabilities for +each vector. +1. runSequential: a boolean indicating, if true, that the algorithm is to +use the sequential reference implementation running in memory. + +After running the algorithm, the output directory will contain: +1. clusters-N: directories containing SequenceFiles(Text, SoftCluster) +produced by the algorithm for each iteration. The Text _key_ is a cluster +identifier string. +1. clusteredPoints: (if runClustering enabled) a directory containing +SequenceFile(IntWritable, WeightedVectorWritable). The IntWritable _key_ is +the clusterId. The WeightedVectorWritable _value_ is a bean containing a +double _weight_ and a VectorWritable _vector_ where the weights are +computed as 1/(1+distance) where the distance is between the cluster center +and the vector using the chosen DistanceMeasure. + + +# Examples + +The following images illustrate Fuzzy k-Means clustering applied to a set +of randomly-generated 2-d data points. The points are generated using a +normal distribution centered at a mean location and with a constant +standard deviation. See the README file in the [/examples/src/main/java/org/apache/mahout/clustering/display/README.txt](https://github.com/apache/mahout/blob/master/examples/src/main/java/org/apache/mahout/clustering/display/README.txt) + for details on running similar examples. + +The points are generated as follows: + +* 500 samples m=\[1.0, 1.0\](1.0,-1.0\.html) + sd=3.0 +* 300 samples m=\[1.0, 0.0\](1.0,-0.0\.html) + sd=0.5 +* 300 samples m=\[0.0, 2.0\](0.0,-2.0\.html) + sd=0.1 + +In the first image, the points are plotted and the 3-sigma boundaries of +their generator are superimposed. + +![fuzzy]({{ BASE_PATH }}/assets/img/SampleData.png) + +In the second image, the resulting clusters (k=3) are shown superimposed upon the sample data. As Fuzzy k-Means is an iterative algorithm, the centers of the clusters in each recent iteration are shown using different colors. 
Bold red is the final clustering and previous iterations are shown in \[orange, yellow, green, blue, violet and gray\](orange,-yellow,-green,-blue,-violet-and-gray\.html) +. Although it misses a lot of the points and cannot capture the original, +superimposed cluster centers, it does a decent job of clustering this data. + +![fuzzy]({{ BASE_PATH }}/assets/img/FuzzyKMeans.png) + +The third image shows the results of running Fuzzy k-Means on a different +data set which is generated using asymmetrical standard deviations. +Fuzzy k-Means does a fair job handling this data set as well. + +![fuzzy]({{ BASE_PATH }}/assets/img/2dFuzzyKMeans.png) + + +#### References  + +* [http://en.wikipedia.org/wiki/Fuzzy_clustering](http://en.wikipedia.org/wiki/Fuzzy_clustering) \ No newline at end of file http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/hierarchical-clustering.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/hierarchical-clustering.md b/website/docs/algorithms/map-reduce/clustering/hierarchical-clustering.md new file mode 100644 index 0000000..35d1cd8 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/hierarchical-clustering.md @@ -0,0 +1,15 @@ +--- +layout: mr_algorithm +title: Hierarchical Clustering +theme: + name: retro-mahout +--- +Hierarchical clustering is the process or finding bigger clusters, and also +the smaller clusters inside the bigger clusters. + +In Apache Mahout, separate algorithms can be used for finding clusters at +different levels. + +See [Top Down Clustering](https://cwiki.apache.org/confluence/display/MAHOUT/Top+Down+Clustering) +. + http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/k-means-clustering.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/k-means-clustering.md b/website/docs/algorithms/map-reduce/clustering/k-means-clustering.md new file mode 100644 index 0000000..caaf634 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/k-means-clustering.md @@ -0,0 +1,182 @@ +--- +layout: mr_algorithm +title: K-Means Clustering +theme: + name: retro-mahout +--- + +# k-Means clustering - basics + +[k-Means](http://en.wikipedia.org/wiki/Kmeans) is a simple but well-known algorithm for grouping objects, clustering. All objects need to be represented +as a set of numerical features. In addition, the user has to specify the +number of groups (referred to as *k*) she wishes to identify. + +Each object can be thought of as being represented by some feature vector +in an _n_ dimensional space, _n_ being the number of all features used to +describe the objects to cluster. The algorithm then randomly chooses _k_ +points in that vector space, these point serve as the initial centers of +the clusters. Afterwards all objects are each assigned to the center they +are closest to. Usually the distance measure is chosen by the user and +determined by the learning task. + +After that, for each cluster a new center is computed by averaging the +feature vectors of all objects assigned to it. The process of assigning +objects and recomputing centers is repeated until the process converges. +The algorithm can be proven to converge after a finite number of +iterations. 
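To make the assign/recompute loop concrete, here is a small, self-contained, in-memory sketch (class and variable names are invented for this illustration; this is not Mahout's MapReduce implementation, which is described under Implementation below):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Random;

    /** Minimal in-memory sketch of the k-means loop described above (illustration only). */
    public class KMeansSketch {

      public static double[][] cluster(double[][] points, int k, int maxIter, double delta) {
        Random rng = new Random(42);
        // Initial centers: k distinct points sampled from the data
        Integer[] idx = new Integer[points.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Collections.shuffle(Arrays.asList(idx), rng);
        double[][] centers = new double[k][];
        for (int i = 0; i < k; i++) centers[i] = points[idx[i]].clone();

        for (int iter = 0; iter < maxIter; iter++) {
          double[][] sums = new double[k][points[0].length];
          int[] counts = new int[k];
          // Assignment step: each point goes to its nearest center
          for (double[] p : points) {
            int best = 0;
            for (int c = 1; c < k; c++) {
              if (squaredDistance(p, centers[c]) < squaredDistance(p, centers[best])) best = c;
            }
            counts[best]++;
            for (int d = 0; d < p.length; d++) sums[best][d] += p[d];
          }
          // Update step: each center becomes the average of its assigned points
          double moved = 0;
          for (int c = 0; c < k; c++) {
            if (counts[c] == 0) continue;  // an empty cluster keeps its previous center
            double[] next = new double[sums[c].length];
            for (int d = 0; d < next.length; d++) next[d] = sums[c][d] / counts[c];
            moved = Math.max(moved, Math.sqrt(squaredDistance(next, centers[c])));
            centers[c] = next;
          }
          if (moved < delta) break;  // converged: no center moved farther than delta
        }
        return centers;
      }

      private static double squaredDistance(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
      }

      public static void main(String[] args) {
        double[][] data = {{1, 1}, {1.2, 0.8}, {8, 8}, {7.9, 8.2}};
        System.out.println(Arrays.deepToString(cluster(data, 2, 10, 1e-4)));
      }
    }
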
+ +Several tweaks concerning distance measure, initial center choice and +computation of new average centers have been explored, as well as the +estimation of the number of clusters _k_. Yet the main principle always +remains the same. + + + + +## Quickstart + +[Here](https://github.com/apache/mahout/blob/master/examples/bin/cluster-reuters.sh) + is a short shell script outline that will get you started quickly with +k-means. This does the following: + +* Accepts clustering type: *kmeans*, *fuzzykmeans*, *lda*, or *streamingkmeans* +* Gets the Reuters dataset +* Runs org.apache.lucene.benchmark.utils.ExtractReuters to generate +reuters-out from reuters-sgm (the downloaded archive) +* Runs seqdirectory to convert reuters-out to SequenceFile format +* Runs seq2sparse to convert SequenceFiles to sparse vector format +* Runs k-means with 20 clusters +* Runs clusterdump to show results + +After following through the output that scrolls past, reading the code will +offer you a better understanding. + + + +## Implementation + +The implementation accepts two input directories: one for the data points +and one for the initial clusters. The data directory contains multiple +input files of SequenceFile(Key, VectorWritable), while the clusters +directory contains one or more SequenceFiles(Text, Cluster) +containing _k_ initial clusters or canopies. None of the input directories +are modified by the implementation, allowing experimentation with initial +clustering and convergence values. + +Canopy clustering can be used to compute the initial clusters for k-KMeans: + + // run the CanopyDriver job + CanopyDriver.runJob("testdata", "output" + ManhattanDistanceMeasure.class.getName(), (float) 3.1, (float) 2.1, false); + + // now run the KMeansDriver job + KMeansDriver.runJob("testdata", "output/clusters-0", "output", + EuclideanDistanceMeasure.class.getName(), "0.001", "10", true); + + +In the above example, the input data points are stored in 'testdata' and +the CanopyDriver is configured to output to the 'output/clusters-0' +directory. Once the driver executes it will contain the canopy definition +files. Upon running the KMeansDriver the output directory will have two or +more new directories: 'clusters-N'' containining the clusters for each +iteration and 'clusteredPoints' will contain the clustered data points. + +This diagram shows the examplary dataflow of the k-Means example +implementation provided by Mahout: + + + + +## Running k-Means Clustering + +The k-Means clustering algorithm may be run using a command-line invocation +on KMeansDriver.main or by making a Java call to KMeansDriver.runJob(). + +Invocation using the command line takes the form: + + + bin/mahout kmeans \ + -i \ + -c \ + -o \ + -k \ + -dm \ + -x \ + -cd \ + -ow + -cl + -xm + + +Note: if the \-k argument is supplied, any clusters in the \-c directory +will be overwritten and \-k random points will be sampled from the input +vectors to become the initial cluster centers. + +Invocation using Java involves supplying the following arguments: + +1. input: a file path string to a directory containing the input data set a +SequenceFile(WritableComparable, VectorWritable). The sequence file _key_ +is not used. +1. clusters: a file path string to a directory containing the initial +clusters, a SequenceFile(key, Cluster \| Canopy). Both KMeans clusters and +Canopy canopies may be used for the initial clusters. +1. output: a file path string to an empty directory which is used for all +output from the algorithm. +1. 
distanceMeasure: the fully-qualified class name of an instance of +DistanceMeasure which will be used for the clustering. +1. convergenceDelta: a double value used to determine if the algorithm has +converged (clusters have not moved more than the value in the last +iteration) +1. maxIter: the maximum number of iterations to run, independent of the +convergence specified +1. runClustering: a boolean indicating, if true, that the clustering step is +to be executed after clusters have been determined. +1. runSequential: a boolean indicating, if true, that the k-means sequential +implementation is to be used to process the input data. + +After running the algorithm, the output directory will contain: +1. clusters-N: directories containing SequenceFiles(Text, Cluster) produced +by the algorithm for each iteration. The Text _key_ is a cluster identifier +string. +1. clusteredPoints: (if \--clustering enabled) a directory containing +SequenceFile(IntWritable, WeightedVectorWritable). The IntWritable _key_ is +the clusterId. The WeightedVectorWritable _value_ is a bean containing a +double _weight_ and a VectorWritable _vector_ where the weight indicates +the probability that the vector is a member of the cluster. For k-Means +clustering, the weights are computed as 1/(1+distance) where the distance +is between the cluster center and the vector using the chosen +DistanceMeasure. + + +# Examples + +The following images illustrate k-Means clustering applied to a set of +randomly-generated 2-d data points. The points are generated using a normal +distribution centered at a mean location and with a constant standard +deviation. See the README file in the [/examples/src/main/java/org/apache/mahout/clustering/display/README.txt](https://github.com/apache/mahout/blob/master/examples/src/main/java/org/apache/mahout/clustering/display/README.txt) + for details on running similar examples. + +The points are generated as follows: + +* 500 samples m=\[1.0, 1.0\](1.0,-1.0\.html) + sd=3.0 +* 300 samples m=\[1.0, 0.0\](1.0,-0.0\.html) + sd=0.5 +* 300 samples m=\[0.0, 2.0\](0.0,-2.0\.html) + sd=0.1 + +In the first image, the points are plotted and the 3-sigma boundaries of +their generator are superimposed. + +![Sample data graph](../../images/SampleData.png) + +In the second image, the resulting clusters (k=3) are shown superimposed upon the sample data. As k-Means is an iterative algorithm, the centers of the clusters in each recent iteration are shown using different colors. Bold red is the final clustering and previous iterations are shown in \[orange, yellow, green, blue, violet and gray\](orange,-yellow,-green,-blue,-violet-and-gray\.html) +. Although it misses a lot of the points and cannot capture the original, +superimposed cluster centers, it does a decent job of clustering this data. + +![kmeans](../../images/KMeans.png) + +The third image shows the results of running k-Means on a different dataset, which is generated using asymmetrical standard deviations. +K-Means does a fair job handling this data set as well. 
+ +![2d kmeans](../../images/2dKMeans.png) \ No newline at end of file http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/latent-dirichlet-allocation.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/latent-dirichlet-allocation.md b/website/docs/algorithms/map-reduce/clustering/latent-dirichlet-allocation.md new file mode 100644 index 0000000..105c6f5 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/latent-dirichlet-allocation.md @@ -0,0 +1,155 @@ +--- +layout: mr_algorithm +title: Latent Dirichlet Allocation +theme: + name: retro-mahout +--- + + +# Overview + +Latent Dirichlet Allocation (Blei et al, 2003) is a powerful learning +algorithm for automatically and jointly clustering words into "topics" and +documents into mixtures of topics. It has been successfully applied to +model change in scientific fields over time (Griffiths and Steyvers, 2004; +Hall, et al. 2008). + +A topic model is, roughly, a hierarchical Bayesian model that associates +with each document a probability distribution over "topics", which are in +turn distributions over words. For instance, a topic in a collection of +newswire might include words about "sports", such as "baseball", "home +run", "player", and a document about steroid use in baseball might include +"sports", "drugs", and "politics". Note that the labels "sports", "drugs", +and "politics", are post-hoc labels assigned by a human, and that the +algorithm itself only assigns associate words with probabilities. The task +of parameter estimation in these models is to learn both what the topics +are, and which documents employ them in what proportions. + +Another way to view a topic model is as a generalization of a mixture model +like [Dirichlet Process Clustering](http://en.wikipedia.org/wiki/Dirichlet_process) +. Starting from a normal mixture model, in which we have a single global +mixture of several distributions, we instead say that _each_ document has +its own mixture distribution over the globally shared mixture components. +Operationally in Dirichlet Process Clustering, each document has its own +latent variable drawn from a global mixture that specifies which model it +belongs to, while in LDA each word in each document has its own parameter +drawn from a document-wide mixture. + +The idea is that we use a probabilistic mixture of a number of models that +we use to explain some observed data. Each observed data point is assumed +to have come from one of the models in the mixture, but we don't know +which. The way we deal with that is to use a so-called latent parameter +which specifies which model each data point came from. + + +# Collapsed Variational Bayes +The CVB algorithm which is implemented in Mahout for LDA combines +advantages of both regular Variational Bayes and Gibbs Sampling. The +algorithm relies on modeling dependence of parameters on latest variables +which are in turn mutually independent. The algorithm uses 2 +methodologies to marginalize out parameters when calculating the joint +distribution and the other other is to model the posterior of theta and phi +given the inputs z and x. + +A common solution to the CVB algorithm is to compute each expectation term +by using simple Gaussian approximation which is accurate and requires low +computational overhead. 
The specifics behind the approximation involve +computing the sum of the means and variances of the individual Bernoulli +variables. + +CVB with Gaussian approximation is implemented by tracking the mean and +variance and subtracting the mean and variance of the corresponding +Bernoulli variables. The computational cost for the algorithm scales on +the order of O(K) with each update to q(z(i,j)). Also for each +document/word pair only 1 copy of the variational posterior is required +over the latent variable. + + +# Invocation and Usage + +Mahout's implementation of LDA operates on a collection of SparseVectors of +word counts. These word counts should be non-negative integers, though +things will-- probably --work fine if you use non-negative reals. (Note +that the probabilistic model doesn't make sense if you do!) To create these +vectors, it's recommended that you follow the instructions in [Creating Vectors From Text](../basics/creating-vectors-from-text.html) +, making sure to use TF and not TFIDF as the scorer. + +Invocation takes the form: + + + bin/mahout cvb \ + -i \ + -dict \ + -o + -dt \ + -k \ + -nt \ + -mt \ + -maxIter \ + -mipd \ + -a \ + -e \ + -seed \ + -tf \ + -block 0> \ + + +Topic smoothing should generally be about 50/K, where K is the number of +topics. The number of words in the vocabulary can be an upper bound, though +it shouldn't be too high (for memory concerns). + +Choosing the number of topics is more art than science, and it's +recommended that you try several values. + +After running LDA you can obtain an output of the computed topics using the +LDAPrintTopics utility: + + + bin/mahout ldatopics \ + -i \ + -d \ + -w \ + -o \ + -h \ + -dt + + + + +# Example + +An example is located in mahout/examples/bin/build-reuters.sh. The script +automatically downloads the Reuters-21578 corpus, builds a Lucene index and +converts the Lucene index to vectors. By uncommenting the last two lines +you can then cause it to run LDA on the vectors and finally print the +resultant topics to the console. + +To adapt the example yourself, you should note that Lucene has specialized +support for Reuters, and that building your own index will require some +adaptation. The rest should hopefully not differ too much. + + +# Parameter Estimation + +We use mean field variational inference to estimate the models. Variational +inference can be thought of as a generalization of [EM](expectation-maximization.html) + for hierarchical Bayesian models. The E-Step takes the form of, for each +document, inferring the posterior probability of each topic for each word +in each document. We then take the sufficient statistics and emit them in +the form of (log) pseudo-counts for each word in each topic. The M-Step is +simply to sum these together and (log) normalize them so that we have a +distribution over the entire vocabulary of the corpus for each topic. + +In implementation, the E-Step is implemented in the Map, and the M-Step is +executed in the reduce step, with the final normalization happening as a +post-processing step. + + +# References + +[David M. Blei, Andrew Y. Ng, Michael I. Jordan, John Lafferty. 2003. Latent Dirichlet Allocation. JMLR.](-http://machinelearning.wustl.edu/mlpapers/paper_files/BleiNJ03.pdf) + +[Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS. ](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf) + +[David Hall, Dan Jurafsky, and Christopher D. Manning. 2008. 
Studying the History of Ideas Using Topic Models ](-http://aclweb.org/anthology//D/D08/D08-1038.pdf) http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/llr---log-likelihood-ratio.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/llr---log-likelihood-ratio.md b/website/docs/algorithms/map-reduce/clustering/llr---log-likelihood-ratio.md new file mode 100644 index 0000000..ed09c5b --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/llr---log-likelihood-ratio.md @@ -0,0 +1,46 @@ +--- +layout: mr_algorithm +title: LLR - Log-likelihood Ratio +theme: + name: retro-mahout +--- + +# Likelihood ratio test + +_Likelihood ratio test is used to compare the fit of two models one +of which is nested within the other._ + +In the context of machine learning and the Mahout project in particular, +the term LLR is usually meant to refer to a test of significance for two +binomial distributions, also known as the G squared statistic. This is a +special case of the multinomial test and is closely related to mutual +information. The value of this statistic is not normally used in this +context as a true frequentist test of significance since there would be +obvious and dreadful problems to do with multiple comparisons, but rather +as a heuristic score to order pairs of items with the most interestingly +connected items having higher scores. In this usage, the LLR has proven +very useful for discriminating pairs of features that have interesting +degrees of cooccurrence and those that do not with usefully small false +positive and false negative rates. The LLR is typically far more suitable +in the case of small than many other measures such as Pearson's +correlation, Pearson's chi squared statistic or z statistics. The LLR as +stated does not, however, make any use of rating data which can limit its +applicability in problems such as the Netflix competition. + +The actual value of the LLR is not usually very helpful other than as a way +of ordering pairs of items. As such, it is often used to determine a +sparse set of coefficients to be estimated by other means such as TF-IDF. +Since the actual estimation of these coefficients can be done in a way that +is independent of the training data such as by general corpus statistics, +and since the ordering imposed by the LLR is relatively robust to counting +fluctuation, this technique can provide very strong results in very sparse +problems where the potential number of features vastly out-numbers the +number of training examples and where features are highly interdependent. 
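To make the score concrete, here is a small self-contained sketch of the G-squared statistic described above, computed from a 2x2 table of cooccurrence counts via the entropy formulation (class and method names are invented for this illustration; Mahout ships its own implementation of the same statistic in its math module):

    /**
     * Sketch of the log-likelihood ratio (G^2) score for a 2x2 cooccurrence table.
     * Counts: k11 = both events, k12 = A only, k21 = B only, k22 = neither.
     */
    public class LlrSketch {

      static double xLogX(long x) {
        return x == 0 ? 0.0 : x * Math.log(x);
      }

      // Unnormalized Shannon entropy: N*log(N) - sum(x*log(x))
      static double entropy(long... counts) {
        long sum = 0;
        double logSum = 0.0;
        for (long c : counts) {
          sum += c;
          logSum += xLogX(c);
        }
        return xLogX(sum) - logSum;
      }

      static double logLikelihoodRatio(long k11, long k12, long k21, long k22) {
        double rowEntropy = entropy(k11 + k12, k21 + k22);
        double columnEntropy = entropy(k11 + k21, k12 + k22);
        double matrixEntropy = entropy(k11, k12, k21, k22);
        // Guard against tiny negative values caused by floating-point round-off
        return Math.max(0.0, 2.0 * (rowEntropy + columnEntropy - matrixEntropy));
      }

      public static void main(String[] args) {
        // A strongly associated pair vs. a near-independent pair, out of 100000 observations
        System.out.println(logLikelihoodRatio(100, 1000, 1000, 97900));  // large score
        System.out.println(logLikelihoodRatio(10, 1000, 1000, 97990));   // small score
      }
    }
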
+ + See Also: + +* [Blog post "surprise and coincidence"](http://tdunning.blogspot.com/2008/03/surprise-and-coincidence.html) +* [G-Test](http://en.wikipedia.org/wiki/G-test) +* [Likelihood Ratio Test](http://en.wikipedia.org/wiki/Likelihood-ratio_test) + + \ No newline at end of file http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/spectral-clustering.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/spectral-clustering.md b/website/docs/algorithms/map-reduce/clustering/spectral-clustering.md new file mode 100644 index 0000000..b6b5d57 --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/spectral-clustering.md @@ -0,0 +1,84 @@ +--- +layout: mr_algorithm +title: Spectral Clustering +theme: + name: retro-mahout +--- + +# Spectral Clustering Overview + +Spectral clustering, as its name implies, makes use of the spectrum (or eigenvalues) of the similarity matrix of the data. It examines the _connectedness_ of the data, whereas other clustering algorithms such as k-means use the _compactness_ to assign clusters. Consequently, in situations where k-means performs well, spectral clustering will also perform well. Additionally, there are situations in which k-means will underperform (e.g. concentric circles), but spectral clustering will be able to segment the underlying clusters. Spectral clustering is also very useful for image segmentation. + +At its simplest, spectral clustering relies on the following four steps: + + 1. Computing a similarity (or _affinity_) matrix `\(\mathbf{A}\)` from the data. This involves determining a pairwise distance function `\(f\)` that takes a pair of data points and returns a scalar. + + 2. Computing a graph Laplacian `\(\mathbf{L}\)` from the affinity matrix. There are several types of graph Laplacians; which is used will often depends on the situation. + + 3. Computing the eigenvectors and eigenvalues of `\(\mathbf{L}\)`. The degree of this decomposition is often modulated by `\(k\)`, or the number of clusters. Put another way, `\(k\)` eigenvectors and eigenvalues are computed. + + 4. The `\(k\)` eigenvectors are used as "proxy" data for the original dataset, and fed into k-means clustering. The resulting cluster assignments are transparently passed back to the original data. + +For more theoretical background on spectral clustering, such as how affinity matrices are computed, the different types of graph Laplacians, and whether the top or bottom eigenvectors and eigenvalues are computed, please read [Ulrike von Luxburg's article in _Statistics and Computing_ from December 2007](http://link.springer.com/article/10.1007/s11222-007-9033-z). It provides an excellent description of the linear algebra operations behind spectral clustering, and imbues a thorough understanding of the types of situations in which it can be used. + +# Mahout Spectral Clustering + +As of Mahout 0.3, spectral clustering has been implemented to take advantage of the MapReduce framework. It uses [SSVD](http://mahout.apache.org/users/dim-reduction/ssvd.html) for dimensionality reduction of the input data set, and [k-means](http://mahout.apache.org/users/clustering/k-means-clustering.html) to perform the final clustering. 
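For reference, the graph Laplacian from step 2 above is most often taken to be the symmetric normalized Laplacian (standard background following the von Luxburg tutorial cited above, not a statement about exactly which variant the Mahout code uses):

`\(\mathbf{L}_{sym} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\)`

where `\(\mathbf{A}\)` is the affinity matrix and `\(\mathbf{D}\)` is the diagonal degree matrix with `\(\mathbf{D}_{ii} = \sum_j \mathbf{A}_{ij}\)`. The `\(k\)` eigenvectors belonging to the smallest eigenvalues of `\(\mathbf{L}_{sym}\)` (equivalently, the largest eigenvalues of `\(\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\)`) are stacked as columns, and the rows of the resulting matrix serve as the "proxy" points handed to k-means in step 4.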
+ +**([MAHOUT-1538](https://issues.apache.org/jira/browse/MAHOUT-1538) will port the existing Hadoop MapReduce implementation to Mahout DSL, allowing for one of several distinct distributed back-ends to conduct the computation)** + +## Input + +The input format for the algorithm currently takes the form of a Hadoop-backed affinity matrix in the form of text files. Each line of the text file specifies a single element of the affinity matrix: the row index `\(i\)`, the column index `\(j\)`, and the value: + +`i, j, value` + +The affinity matrix is symmetric, and any unspecified `\(i, j\)` pairs are assumed to be 0 for sparsity. The row and column indices are 0-indexed. Thus, only the non-zero entries of either the upper or lower triangular need be specified. + +The matrix elements specified in the text files are collected into a Mahout `DistributedRowMatrix`. + +**([MAHOUT-1539](https://issues.apache.org/jira/browse/MAHOUT-1539) will allow for the creation of the affinity matrix to occur as part of the core spectral clustering algorithm, as opposed to the current requirement that the user create this matrix themselves and provide it, rather than the original data, to the algorithm)** + +## Running spectral clustering + +**([MAHOUT-1540](https://issues.apache.org/jira/browse/MAHOUT-1540) will provide a running example of this algorithm and this section will be updated to show how to run the example and what the expected output should be; until then, this section provides a how-to for simply running the algorithm on arbitrary input)** + +Spectral clustering can be invoked with the following arguments. + + bin/mahout spectralkmeans \ + -i \ + -o \ + -d \ + -k \ + -x + +The affinity matrix can be contained in a single text file (using the aforementioned one-line-per-entry format) or span many text files [per (MAHOUT-978](https://issues.apache.org/jira/browse/MAHOUT-978), do not prefix text files with a leading underscore '_' or period '.'). The `-d` flag is required for the algorithm to know the dimensions of the affinity matrix. `-k` is the number of top eigenvectors from the normalized graph Laplacian in the SSVD step, and also the number of clusters given to k-means after the SSVD step. + +## Example + +To provide a simple example, take the following affinity matrix, contained in a text file called `affinity.txt`: + + 0, 0, 0 + 0, 1, 0.8 + 0, 2, 0.5 + 1, 0, 0.8 + 1, 1, 0 + 1, 2, 0.9 + 2, 0, 0.5 + 2, 1, 0.9 + 2, 2, 0 + +With this 3-by-3 matrix, `-d` would be `3`. Furthermore, since all affinity matrices are assumed to be symmetric, the entries specifying both `1, 2, 0.9` and `2, 1, 0.9` are redundant; only one of these is needed. Additionally, any entries that are 0, such as those along the diagonal, also need not be specified at all. They are provided here for completeness. + +In general, larger values indicate a stronger "connectedness", whereas smaller values indicate a weaker connectedness. This will vary somewhat depending on the distance function used, though a common one is the [RBF kernel](http://en.wikipedia.org/wiki/RBF_kernel) (used in the above example) which returns values in the range [0, 1], where 0 indicates completely disconnected (or completely dissimilar) and 1 is fully connected (or identical). 
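For reference, the RBF kernel mentioned above computes the affinity between two points `\(x_i\)` and `\(x_j\)` as (standard definition; the bandwidth `\(\sigma\)` is chosen when the affinity matrix is constructed, since the user currently builds that matrix before invoking the algorithm):

`\(A_{ij} = \exp\left(-\frac{\lVert x_i - x_j \rVert^2}{2\sigma^2}\right)\)`

so identical points receive affinity 1 and the affinity decays toward 0 as the distance between the points grows.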
+ +The call signature with this matrix could be as follows: + + bin/mahout spectralkmeans \ + -i s3://mahout-example/input/ \ + -o s3://mahout-example/output/ \ + -d 3 \ + -k 2 \ + -x 10 + +There are many other optional arguments, in particular for tweaking the SSVD process (block size, number of power iterations, etc.) and the k-means clustering step (distance measure, convergence delta, etc.). \ No newline at end of file http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/algorithms/map-reduce/clustering/streaming-k-means.md ---------------------------------------------------------------------- diff --git a/website/docs/algorithms/map-reduce/clustering/streaming-k-means.md b/website/docs/algorithms/map-reduce/clustering/streaming-k-means.md new file mode 100644 index 0000000..389720d --- /dev/null +++ b/website/docs/algorithms/map-reduce/clustering/streaming-k-means.md @@ -0,0 +1,174 @@ +--- +layout: mr_algorithm +title: StreamingKMeans +theme: + name: retro-mahout +--- + +# *StreamingKMeans* algorithm + +The *StreamingKMeans* algorithm is a variant of Algorithm 1 from [Shindler et al][1] and consists of two steps: + + 1. Streaming step + 2. BallKMeans step. + +The streaming step is a randomized algorithm that makes one pass through the data and +produces as many centroids as it determines is optimal. This step can be viewed as +a preparatory dimensionality reduction. If the size of the data stream is *n* and the +expected number of clusters is *k*, the streaming step will produce roughly *k\*log(n)* +clusters that will be passed on to the BallKMeans step, which will further reduce the +number of clusters down to *k*. BallKMeans is a randomized Lloyd-type algorithm that +has been studied in detail; see [Ostrovsky et al][2]. + +## Streaming step + +--- + +### Overview + +The streaming step is a derivative of the streaming +portion of Algorithm 1 in [Shindler et al][1]. The main difference between the two is that +Algorithm 1 of [Shindler et al][1] assumes +knowledge of the size of the data stream and uses it to set a key parameter +for the algorithm. More precisely, the initial *distanceCutoff* (defined below), which is +denoted by *f* in [Shindler et al][1], is set to *1/(k(1+log(n)))*. The *distanceCutoff* influences the number of clusters that the algorithm +will produce. +In contrast, the Mahout implementation does not require knowledge of the size of the +data stream. Instead, it dynamically re-evaluates the parameters that depend on the size +of the data stream at runtime as more and more data is processed. In particular, +the parameter *numClusters* (defined below) changes its value as the data is processed. + +### Parameters + + - **numClusters** (int): Conceptually, *numClusters* represents the algorithm's guess at the optimal +number of clusters it is shooting for. In particular, *numClusters* will increase at run +time as more and more data is processed. Note that *numClusters* is not the number of clusters that the algorithm will produce. Also, *numClusters* should not be set to the final number of clusters that we expect to receive as the output of *StreamingKMeans*. + - **distanceCutoff** (double): a parameter representing the distance between a point and +its closest centroid beyond which +the new point will definitely be assigned to a new cluster. *distanceCutoff* can be thought +of as an estimate of the variable *f* from Shindler et al.
The default initial value for +*distanceCutoff* is *1.0/numClusters* and *distanceCutoff* grows as a geometric progression with +common ratio *beta* (see below). + - **beta** (double): a constant parameter that controls the growth of *distanceCutoff*. If the initial setting of *distanceCutoff* is *d0*, *distanceCutoff* will grow as the geometric progression with initial term *d0* and common ratio *beta*. The default value for *beta* is 1.3. + - **clusterLogFactor** (double): a constant parameter such that *clusterLogFactor \* log(numProcessedPoints)* is the runtime estimate of the number of clusters to be produced by the streaming step. If the final number of clusters (that we expect *StreamingKMeans* to output) is *k*, *clusterLogFactor* can be set to *k*. + - **clusterOvershoot** (double): a constant multiplicative slack factor that slows down the collapsing of clusters. The default value is 2. + + +### Algorithm + +The algorithm processes the data points one by one and makes only one pass through the data. +The first point from the data stream will form the centroid of the first cluster (this designation may change as more points are processed). Suppose there are *r* clusters at some stage and a new point *p* is being processed. The new point can either be added to one of the existing *r* clusters or become a new cluster. To decide: + + - let *c* be the closest cluster to point *p* + - let *d* be the distance between *c* and *p* + - if *d > distanceCutoff*, create a new cluster from *p* (*p* is too far away from the clusters to be part of any one of them) + - else (*d <= distanceCutoff*), create a new cluster with probability *d / distanceCutoff* (the probability of creating a new cluster increases as *d* increases); otherwise *p* is added to cluster *c*. + +There will be either *r* or *r+1* clusters after processing a new point. + +As the number of clusters increases, it will go over the *clusterOvershoot \* numClusters* limit (*numClusters* represents a recommendation for the number of clusters that the streaming step should aim for and *clusterOvershoot* is the slack). To decrease the number of clusters, the existing clusters are treated as data points and are re-clustered (collapsed). This tends to make the number of clusters go down. If the number of clusters is still too high, *distanceCutoff* is increased.
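As a rough illustration of the decision rule above, here is a toy Python sketch of the streaming step. It is not the Mahout implementation: it omits the runtime growth of *numClusters* and the centroid-collapsing pass, and simply folds absorbed points into their closest centroid as a weighted running mean; all names and values are illustrative.

    import math
    import random

    def streaming_step(points, num_clusters, beta=1.3, cluster_overshoot=2.0):
        # Toy sketch of the per-point decision rule only. The real implementation also
        # grows numClusters at runtime and collapses clusters by re-clustering the
        # centroids themselves when there are too many of them.
        distance_cutoff = 1.0 / num_clusters          # initial estimate of f
        centroids, weights = [], []
        for p in points:
            if not centroids:                         # the first point seeds the first cluster
                centroids.append(list(p))
                weights.append(1.0)
                continue
            # c = closest existing centroid, d = distance from p to c
            idx = min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
            d = math.dist(p, centroids[idx])
            if d > distance_cutoff or random.random() < d / distance_cutoff:
                # p is far away, or won the coin flip: it becomes a new cluster
                centroids.append(list(p))
                weights.append(1.0)
            else:
                # otherwise p is folded into the closest cluster (weighted running mean)
                w = weights[idx]
                centroids[idx] = [(w * ci + pi) / (w + 1.0) for ci, pi in zip(centroids[idx], p)]
                weights[idx] = w + 1.0
            # too many clusters for the current target: grow the cutoff geometrically
            if len(centroids) > cluster_overshoot * num_clusters:
                distance_cutoff *= beta
        return centroids, weights

    print(streaming_step([(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)], num_clusters=2))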
## BallKMeans step +--- +### Overview +The algorithm is a Lloyd-type algorithm that takes a set of weighted vectors and returns *k* centroids; see [Ostrovsky et al][2] for details. The algorithm has two stages: + + 1. Seeding + 2. Ball k-means + +The seeding stage is an initial guess of where the centroids should be. The initial guess is improved using the ball k-means stage. + +### Parameters + +* **numClusters** (int): the number k of centroids to return. The algorithm will return exactly this number of centroids. + +* **maxNumIterations** (int): After seeding, the iterative clustering procedure will be run at most *maxNumIterations* times. 1 or 2 iterations are recommended. Increasing beyond this will increase the accuracy of the result at the expense of runtime. Each successive iteration yields diminishing returns in lowering the cost. + +* **trimFraction** (double): Outliers are ignored when computing the center of mass for a cluster. For any datapoint *x*, let *c* be the nearest centroid. Let *d* be the minimum distance from *c* to another centroid. If the distance from *x* to *c* is greater than *trimFraction \* d*, then *x* is considered an outlier during that iteration of ball k-means. The default is 9/10. In [Ostrovsky et al][2], the authors use *trimFraction* = 1/3, but this does not mean that 1/3 is optimal in practice. + +* **kMeansPlusPlusInit** (boolean): If true, the seeding method is k-means++. If false, the seeding method is to select points uniformly at random. The default is true. + +* **correctWeights** (boolean): If *correctWeights* is true, outliers will be considered when calculating the weight of centroids. The default is true. Note that outliers are not considered when calculating the position of centroids. + +* **testProbability** (double): If *testProbability* is *p* (0 < *p* < 1), the data (of size n) is partitioned into a test set (of size *p\*n*) and a training set (of size *(1-p)\*n*). If 0, no test set is created (the entire data set is used for both training and testing). The default is 0.1 if *numRuns* > 1. If *numRuns* = 1, then no test set should be created (since it is only used to compare the cost between different runs). + +* **numRuns** (int): This is the number of runs to perform. The solution of lowest cost is returned. The default is 1 run. + +### Algorithm +The algorithm can be instructed to take multiple independent runs (using the *numRuns* parameter) and the algorithm will select the best solution (i.e., the one with the lowest cost). In practice, one run is sufficient to find a good solution. + +Each run operates as follows: a seeding procedure is used to select k centroids, and then ball k-means is run iteratively to refine the solution. + +The seeding procedure can be set to either 'uniformly at random' or 'k-means++' using the *kMeansPlusPlusInit* boolean variable. Seeding with k-means++ involves more computation but offers better results in practice. + +Each iteration of ball k-means runs as follows: + +1. Clusters are formed by assigning each datapoint to the nearest centroid. +2. The centers of mass of the trimmed clusters (see *trimFraction* parameter above) become the new centroids. + +The data may be partitioned into a test set and a training set (see *testProbability*). The seeding procedure and ball k-means run on the training set. The cost is computed on the test set. + + +## Usage of *StreamingKMeans* + + bin/mahout streamingkmeans + -i <input> + -o <output> + -ow + -k <numClusters> + -km <estimatedNumMapClusters> + -e <estimatedDistanceCutoff> + -mi <maxNumIterations> + -tf <trimFraction> + -ri + -iw + -testp <testProbability> + -nbkm <numBallKMeansRuns> + -dm <distanceMeasure> + -sc <searcherClass> + -np <numProjections> + -s <searchSize> + -rskm + -xm <method> + -h + --tempDir <tempDir> + --startPhase <startPhase> + --endPhase <endPhase> + + +### Details on Job-Specific Options: + + * `--input (-i)`: Path to job input directory. + * `--output (-o)`: The directory pathname for output. + * `--overwrite (-ow)`: If present, overwrite the output directory before running the job. + * `--numClusters (-k)`: The k in k-Means. Approximately this many clusters will be generated. + * `--estimatedNumMapClusters (-km)`: The estimated number of clusters to use for the Map phase of the job when running StreamingKMeans. This should be around k \* log(n), where k is the final number of clusters and n is the total number of data points to cluster. + * `--estimatedDistanceCutoff (-e)`: The initial estimated distance cutoff between two points for forming new clusters. If no value is given, it's estimated from the data set. + * `--maxNumIterations (-mi)`: The maximum number of iterations to run for the BallKMeans algorithm used by the reducer. If no value is given, defaults to 10. + * `--trimFraction (-tf)`: The 'ball' aspect of ball k-means means that only the closest points to the centroid will actually be used for updating.
The points used are those whose distance to the center is within trimFraction \* the distance to the closest other center. If no value is given, defaults to 0.9. + * `--randomInit (-ri)`: Whether to use k-means++ initialization or random initialization of the seed centroids. Essentially, k-means++ provides better clusters but takes longer, whereas random initialization is faster but produces worse clusters, tends to fail more often, and needs multiple runs to be comparable to k-means++. If set, random initialization is used. + * `--ignoreWeights (-iw)`: Whether to correct the weights of the centroids after the clustering is done. The weights end up being wrong because of the trimFraction and possible train/test splits. In some cases, especially in a pipeline, having an accurate count of the weights is useful. If set, ignores the final weights. + * `--testProbability (-testp)`: A double value between 0 and 1 that represents the percentage of points to be used for 'testing' different clustering runs in the final BallKMeans step. If no value is given, defaults to 0.1. + * `--numBallKMeansRuns (-nbkm)`: Number of BallKMeans runs to use at the end to try to cluster the points. If no value is given, defaults to 4. + * `--distanceMeasure (-dm)`: The classname of the DistanceMeasure. Default is SquaredEuclidean. + * `--searcherClass (-sc)`: The type of searcher to be used when performing nearest neighbor searches. Defaults to ProjectionSearch. + * `--numProjections (-np)`: The number of projections considered in estimating the distances between vectors. Only used when the searcher requested is either ProjectionSearch or FastProjectionSearch. If no value is given, defaults to 3. + * `--searchSize (-s)`: In more efficient searches (non-BruteSearch), not all distances are calculated for determining the nearest neighbors. The number of elements whose distances from the query vector are actually computed is proportional to searchSize. If no value is given, defaults to 1. + * `--reduceStreamingKMeans (-rskm)`: There might be too many intermediate clusters from the mapper to fit into memory, so the reducer can run another pass of StreamingKMeans to collapse them down to fewer clusters. + * `--method (-xm)`: The execution method to use: sequential or mapreduce. Default is mapreduce. + * `--help (-h)`: Print out help. + * `--tempDir`: Intermediate output directory. + * `--startPhase`: First phase to run. + * `--endPhase`: Last phase to run. + + +## References + +1. [M. Shindler, A. Wong, A. Meyerson: Fast and Accurate k-means For Large Datasets][1] +2. [R. Ostrovsky, Y. Rabani, L. Schulman, Ch. Swamy: The Effectiveness of Lloyd-Type Methods for the k-means Problem][2] + + +[1]: http://nips.cc/Conferences/2011/Program/event.php?ID=2989 "M. Shindler, A. Wong, A. Meyerson: Fast and Accurate k-means For Large Datasets" + +[2]: http://www.math.uwaterloo.ca/~cswamy/papers/kmeansfnl.pdf "R. Ostrovsky, Y. Rabani, L. Schulman, Ch.
Swamy: The Effectiveness of Lloyd-Type Methods for the k-means Problem" http://git-wip-us.apache.org/repos/asf/mahout/blob/516e3fb9/website/docs/tutorials/map-reduce/clustering/20newsgroups.md ---------------------------------------------------------------------- diff --git a/website/docs/tutorials/map-reduce/clustering/20newsgroups.md b/website/docs/tutorials/map-reduce/clustering/20newsgroups.md new file mode 100644 index 0000000..379e8b3 --- /dev/null +++ b/website/docs/tutorials/map-reduce/clustering/20newsgroups.md @@ -0,0 +1,11 @@ +--- +layout: mr_tutorial +title: 20Newsgroups +theme: + name: retro-mahout +--- + + +# Naive Bayes using 20 Newsgroups Data + +See [https://issues.apache.org/jira/browse/MAHOUT-9](https://issues.apache.org/jira/browse/MAHOUT-9)