From conflue...@apache.org
Subject [CONF] Apache Lucene Mahout > k-Means
Date Sun, 29 Nov 2009 12:00:00 GMT
Space: Apache Lucene Mahout (http://cwiki.apache.org/confluence/display/MAHOUT)
Page: k-Means (http://cwiki.apache.org/confluence/display/MAHOUT/k-Means)

Change Comment:
---------------------------------------------------------------------
Appended another dataflow diagram

Edited by Peter Wippermann:
---------------------------------------------------------------------
h1. k-Means

k-Means is a rather simple but well-known algorithm for grouping objects, i.e. for clustering. Again, all objects need to be represented as a set of numerical features. In addition, the user has to specify the number of groups (referred to as _k_) he wishes to identify.

Each object can be thought of as being represented by some feature vector in an _n_-dimensional space, _n_ being the number of all features used to describe the objects to cluster. The algorithm then randomly chooses _k_ points in that vector space; these points serve as the initial centers of the clusters. Afterwards all objects are each assigned to the center they are closest to. Usually the distance measure is chosen by the user and determined by the learning task.

After that, for each cluster a new center is computed by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until it converges. The algorithm can be proven to converge after a finite number of iterations.

Several tweaks concerning the distance measure, the choice of initial centers and the computation of the new average centers have been explored, as well as the estimation of the number of clusters _k_. Yet the main principle always remains the same.
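
To make the iteration above concrete, here is a small sequential sketch in plain Java (squared Euclidean distance, random initial centers, a fixed iteration cap). The class and method names are illustrative only and are not part of Mahout.

{code}
import java.util.Arrays;
import java.util.Random;

/** Illustrative sequential k-Means; not the Mahout implementation. */
public class SimpleKMeans {

  /** Squared Euclidean distance between two feature vectors of equal length. */
  static double distance(double[] a, double[] b) {
    double sum = 0.0;
    for (int i = 0; i < a.length; i++) {
      double d = a[i] - b[i];
      sum += d * d;
    }
    return sum;
  }

  /**
   * @param points        the n-dimensional feature vectors to cluster
   * @param k             the number of clusters requested by the user
   * @param maxIterations safety cap on the number of iterations
   * @return the final cluster centers
   */
  static double[][] cluster(double[][] points, int k, int maxIterations) {
    Random random = new Random();
    int n = points[0].length;
    // 1. Pick k (not necessarily distinct) data points as the initial centers.
    double[][] centers = new double[k][];
    for (int c = 0; c < k; c++) {
      centers[c] = points[random.nextInt(points.length)].clone();
    }
    int[] assignment = new int[points.length];
    Arrays.fill(assignment, -1); // no point is assigned yet
    for (int iter = 0; iter < maxIterations; iter++) {
      // 2. Assign every point to the center it is closest to.
      boolean changed = false;
      for (int p = 0; p < points.length; p++) {
        int best = 0;
        for (int c = 1; c < k; c++) {
          if (distance(points[p], centers[c]) < distance(points[p], centers[best])) {
            best = c;
          }
        }
        if (assignment[p] != best) {
          assignment[p] = best;
          changed = true;
        }
      }
      if (!changed) {
        break; // converged: no point switched its cluster
      }
      // 3. Recompute each center as the average of the points assigned to it.
      double[][] sums = new double[k][n];
      int[] counts = new int[k];
      for (int p = 0; p < points.length; p++) {
        counts[assignment[p]]++;
        for (int i = 0; i < n; i++) {
          sums[assignment[p]][i] += points[p][i];
        }
      }
      for (int c = 0; c < k; c++) {
        if (counts[c] > 0) { // keep the old center if no points were assigned
          for (int i = 0; i < n; i++) {
            centers[c][i] = sums[c][i] / counts[c];
          }
        }
      }
    }
    return centers;
  }
}
{code}

In the MapReduce formulation described further down, step 2 becomes the map phase, while step 3 is split between a combiner (partial sums) and a reducer (final averaging).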



h2. Strategy for parallelization

Some ideas can be found in the [Cluster computing and MapReduce|http://code.google.com/edu/content/submissions/mapreduce-minilecture/listing.html] lecture video series \[by Google(r)\]; k-Means clustering is discussed in [lecture #4|http://www.youtube.com/watch?v=1ZDybXl212Q]. Slides can be found [here|http://code.google.com/edu/content/submissions/mapreduce-minilecture/lec4-clustering.ppt].

Interestingly, a Hadoop-based implementation using [Canopy clustering|http://en.wikipedia.org/wiki/Canopy_clustering_algorithm] appears to be available here: [http://code.google.com/p/canopy-clustering/] (GPL 3 license).

Here is another useful resource on cluster analysis: [http://www2.chass.ncsu.edu/garson/PA765/cluster.htm].

h2. Design of implementation

The initial implementation in MAHOUT-5 accepts two input directories: one for the data points and one for the initial clusters. The data directory contains multiple input files holding dense vectors of Java type Float[], encoded as "\[v1, v2, v3, ..., vn, \]", while the clusters directory contains a single file 'part-00000' in SequenceFile format that holds all of the initial cluster centers, encoded as "Cn - \[c1, c2, ..., cn, \]". Neither input directory is modified by the implementation, which allows experimentation with different initial clusterings and convergence values.
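
For illustration, a dense point such as the Float[] {1.0f, 2.5f, 3.0f} would appear in a data file as the text "\[1.0, 2.5, 3.0, \]". A minimal sketch of producing that encoding (the helper class below is hypothetical and not part of MAHOUT-5):

{code}
/** Hypothetical helper, not part of MAHOUT-5: writes a dense point in the "[v1, v2, ..., vn, ]" text form. */
public class PointFormat {

  public static String format(Float[] point) {
    StringBuilder out = new StringBuilder("[");
    for (Float v : point) {
      out.append(v).append(", "); // every value is followed by ", ", including the last one
    }
    return out.append(']').toString();
  }

  public static void main(String[] args) {
    // prints "[1.0, 2.5, 3.0, ]" -- the encoding described above
    System.out.println(format(new Float[] {1.0f, 2.5f, 3.0f}));
  }
}
{code}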

The program iterates over the input points and clusters, outputting a new directory "clusters-N"
containing a cluster center file "part-00000" for each iteration N. This process uses a mapper/combiner/reducer/driver
as follows:
  * KMeansMapper - reads the input clusters during its configure() method, then assigns each input point to its nearest cluster, as defined by the user-supplied distance measure, and outputs it. Output key is: encoded cluster. Output value is: input point.
  * KMeansCombiner - receives all key:value pairs from the mapper and produces partial sums of the input vectors for each cluster. Output key is: encoded cluster. Output value is: "<number of points in partial sum>, <partial sum vector summing all such points>".
  * KMeansReducer - a single reducer receives all key:value pairs from all combiners and sums them to produce a new centroid for the cluster, which is output (see the small sketch of this averaging step after this list). Output key is: encoded cluster identifier (e.g. "C14"). Output value is: formatted cluster (e.g. "C14 - \[c1, c2, ..., cn, \]"). The reducer encodes unconverged clusters with a 'Cn' cluster id and converged clusters with a 'Vn' cluster id.
  * KMeansDriver - iterates over the points and clusters until all output clusters have converged ('Vn' cluster ids) or until a maximum number of iterations has been reached. During the iterations a new clusters directory "clusters-N" is produced, with the output clusters from the previous iteration used as input to the next. A final pass over the data using the KMeansMapper clusters all points into an output directory "points"; this pass has no combiner or reducer steps.
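
The arithmetically delicate part of the reduce step is that the merged sum must be divided by the total number of points, not by the number of partial sums received from the combiners. A small self-contained sketch of just that averaging step (class and method names are hypothetical, not the actual KMeansReducer code):

{code}
/** Illustrative only; this is not the MAHOUT-5 KMeansReducer. */
public class CentroidMerge {

  /**
   * Merges combiner output of the form "<number of points>, <partial sum vector>"
   * into a new centroid: counts[j] is the point count of partial sum j and
   * partialSums[j] is the corresponding summed vector.
   */
  public static double[] newCentroid(int[] counts, double[][] partialSums) {
    int totalPoints = 0;
    double[] centroid = new double[partialSums[0].length];
    for (int j = 0; j < partialSums.length; j++) {
      totalPoints += counts[j];
      for (int i = 0; i < centroid.length; i++) {
        centroid[i] += partialSums[j][i];
      }
    }
    for (int i = 0; i < centroid.length; i++) {
      centroid[i] /= totalPoints; // divide by the total point count, not by the number of partial sums
    }
    return centroid;
  }
}
{code}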

With the latest diff (MAHOUT-5c and newer), Canopy clustering can be used to compute the initial
clusters for KMeans:
{quote}
    // now run the CanopyDriver job
    CanopyDriver.runJob("testdata/points", "testdata/canopies",
        ManhattanDistanceMeasure.class.getName(), (float) 3.1, (float) 2.1,
        "dist/apache-mahout-0.1-dev.jar");

    // now run the KMeansDriver job
    KMeansDriver.runJob("testdata/points", "testdata/canopies", "output",
        EuclideanDistanceMeasure.class.getName(), "0.001", "10");
{quote}

In the above example, the input data points are stored in 'testdata/points' and the CanopyDriver is configured to output to the 'testdata/canopies' directory. Once the driver has executed, that directory will contain the canopy definition file. After running the KMeansDriver, the output directory will contain two or more new directories: the 'clusters-N' directories holding the clusters for each iteration, and 'points' holding the clustered data points.
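
To inspect the results, one can dump a part file from the output directories. The sketch below is an assumption, not something stated above: it assumes the 'points' output is also written as a SequenceFile with Text keys and values, and takes its path from the example layout; adjust the path and Writable types to what the job actually produces.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

/** Illustrative only: dumps one output part file, assuming Text keys and values. */
public class DumpOutput {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Path taken from the example layout described above.
    SequenceFile.Reader reader =
        new SequenceFile.Reader(fs, new Path("output/points/part-00000"), conf);
    Text key = new Text();
    Text value = new Text();
    while (reader.next(key, value)) {
      System.out.println(key + "\t" + value); // print each stored key/value pair
    }
    reader.close();
  }
}
{code}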

{gliffy:name=k-Means Example|space=MAHOUT|page=k-Means|align=left|size=L}
!k-means in mahout|align=right!
