From conflue...@apache.org
Subject [CONF] Apache Mahout > K-Means Clustering
Date Sun, 10 Jul 2011 00:47:01 GMT
Space: Apache Mahout (https://cwiki.apache.org/confluence/display/MAHOUT)
Page: K-Means Clustering (https://cwiki.apache.org/confluence/display/MAHOUT/K-Means+Clustering)

Edited by Sabari Ajay Kumar:
---------------------------------------------------------------------
k-Means is a simple but well-known algorithm for grouping objects, i.e. clustering. As with the other clustering algorithms, all objects need to be represented as a set of numerical features. In addition, the user has to specify the number of groups (referred to as _k_) to be identified.
Each object can be thought of as a feature vector in an _n_-dimensional space, _n_ being the number of features used to describe the objects to be clustered. The algorithm then randomly chooses _k_ points in that vector space; these points serve as the initial centers of the clusters. Afterwards each object is assigned to the center it is closest to, where the distance measure is usually chosen by the user and determined by the learning task.
After that, a new center is computed for each cluster by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until it converges; the algorithm can be proven to converge after a finite number of iterations.
Several tweaks concerning the distance measure, the choice of initial centers, and the computation of new average centers have been explored, as has the estimation of the number of clusters _k_. Yet the main principle always remains the same.
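
To make the assign-and-recompute loop concrete, here is a minimal, self-contained sketch in plain Java. It is independent of Mahout's distributed implementation described below; class and variable names are illustrative only:

{noformat}
import java.util.Random;

public final class SimpleKMeans {

  /** Plain k-Means (Lloyd's algorithm) on dense points; returns the k centers. */
  public static double[][] cluster(double[][] points, int k, int maxIter) {
    Random rnd = new Random();
    int dim = points[0].length;

    // Randomly choose k input points as the initial centers.
    double[][] centers = new double[k][];
    for (int c = 0; c < k; c++) {
      centers[c] = points[rnd.nextInt(points.length)].clone();
    }

    int[] assignment = new int[points.length];
    for (int iter = 0; iter < maxIter; iter++) {
      // Assignment step: move each point to its nearest center.
      boolean changed = false;
      for (int p = 0; p < points.length; p++) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < k; c++) {
          double d = 0.0; // squared Euclidean distance
          for (int i = 0; i < dim; i++) {
            double diff = points[p][i] - centers[c][i];
            d += diff * diff;
          }
          if (d < bestDist) {
            bestDist = d;
            best = c;
          }
        }
        if (assignment[p] != best) {
          assignment[p] = best;
          changed = true;
        }
      }
      if (!changed && iter > 0) {
        break; // converged: no point switched clusters
      }

      // Update step: recompute each center as the mean of its points.
      double[][] sums = new double[k][dim];
      int[] counts = new int[k];
      for (int p = 0; p < points.length; p++) {
        counts[assignment[p]]++;
        for (int i = 0; i < dim; i++) {
          sums[assignment[p]][i] += points[p][i];
        }
      }
      for (int c = 0; c < k; c++) {
        if (counts[c] > 0) {
          for (int i = 0; i < dim; i++) {
            centers[c][i] = sums[c][i] / counts[c];
          }
        }
      }
    }
    return centers;
  }
}
{noformat}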



h2. Quickstart

[Here|K-Means Clustering^quickstart-kmeans.sh] is a short shell script outline that will get
you started quickly with k-Means. This does the following:

* Get the Reuters dataset
* Run org.apache.lucene.benchmark.utils.ExtractReuters to generate reuters-out from reuters-sgm (the downloaded archive)
* Run seqdirectory to convert reuters-out to SequenceFile format
* Run seq2sparse to convert SequenceFiles to sparse vector format
* Finally, run kMeans with 20 clusters.

After following the output that scrolls past, reading the code will offer you a better understanding; a sketch of the individual commands follows below.
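
A minimal sketch of those steps as shell commands. Directory names, the Lucene benchmark classpath and the option values are illustrative and may need adjusting for your checkout:

{noformat}
# 1. Get the Reuters dataset and unpack it into reuters-sgm/
# 2. Extract plain-text articles (ExtractReuters lives in the Lucene benchmark jar)
java -cp <lucene-benchmark-jar-and-deps> \
    org.apache.lucene.benchmark.utils.ExtractReuters reuters-sgm reuters-out

# 3. Convert the text files to SequenceFile format
bin/mahout seqdirectory -i reuters-out -o reuters-seqdir -c UTF-8

# 4. Convert the SequenceFiles to sparse vectors
bin/mahout seq2sparse -i reuters-seqdir -o reuters-sparse

# 5. Run k-Means with 20 clusters on the tf-idf vectors
bin/mahout kmeans -i reuters-sparse/tfidf-vectors -c reuters-initial-clusters \
    -o reuters-kmeans -k 20 -x 10 \
    -dm org.apache.mahout.common.distance.CosineDistanceMeasure
{noformat}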


h2. Strategy for parallelization

Some ideas can be found in the [Cluster computing and MapReduce|http://code.google.com/edu/content/submissions/mapreduce-minilecture/listing.html] lecture video series \[by Google(r)\]; k-Means clustering is discussed in [lecture #4|http://www.youtube.com/watch?v=1ZDybXl212Q]. Slides can be found [here|http://code.google.com/edu/content/submissions/mapreduce-minilecture/lec4-clustering.ppt].

Interestingly, a Hadoop-based implementation using [Canopy clustering|http://en.wikipedia.org/wiki/Canopy_clustering_algorithm] seems to be available here: [http://code.google.com/p/canopy-clustering/] (GPL 3 license)

Here's another useful resource on cluster analysis: [http://www2.chass.ncsu.edu/garson/PA765/cluster.htm].

h2. Design of implementation

The implementation accepts two input directories: one for the data points and one for the
initial clusters. The data directory contains multiple input files of SequenceFile(key, VectorWritable),
while the clusters directory contains one or more SequenceFiles(Text, Cluster | Canopy) containing
_k_ initial clusters or canopies. Neither input directory is modified by the implementation, allowing experimentation with initial clustering and convergence values.

The program iterates over the input points and clusters, outputting a new directory "clusters-N"
containing SequenceFile(Text, Cluster) files for each iteration N. This process uses a mapper/combiner/reducer/driver
as follows:
* KMeansMapper - reads the input clusters during its setup() method, then assigns each input point to its nearest cluster, as defined by the user-supplied distance measure (a sketch of this step follows the list). Output key is: cluster identifier. Output value is: ClusterObservation.
* KMeansCombiner - receives all key:value pairs from the mapper and produces partial sums
of the input vectors for each cluster. Output key is: cluster identifier. Output value is
ClusterObservation.
* KMeansReducer - a single reducer receives all key:value pairs from all combiners and sums them to produce a new centroid for the cluster, which is output. Output key is: encoded cluster identifier. Output value is: Cluster. The reducer encodes unconverged clusters with a 'Cn' cluster Id and converged clusters with a 'Vn' cluster Id.
* KMeansDriver - iterates over the points and clusters until all output clusters have converged
(Vn clusterIds) or until a maximum number of iterations has been reached. During iterations,
a new clusters directory "clusters-N" is produced with the output clusters from the previous
iteration used for input to the next. A final optional pass over the data using the KMeansClusterMapper
clusters all points to an output directory "clusteredPoints" and has no combiner or reducer
steps.
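
A hedged sketch of the assignment step performed by the mapper, using Mahout's DistanceMeasure interface. The class and method names below are illustrative; the real KMeansMapper works with Mahout's Cluster and VectorWritable types:

{noformat}
import java.util.List;

import org.apache.mahout.common.distance.DistanceMeasure;
import org.apache.mahout.math.Vector;

// Illustrative only: pick the nearest of the clusters read during setup().
final class NearestClusterSketch {
  static int nearestClusterId(Vector point, List<Vector> centers, DistanceMeasure measure) {
    int nearest = -1;
    double nearestDistance = Double.MAX_VALUE;
    for (int i = 0; i < centers.size(); i++) {
      double d = measure.distance(centers.get(i), point);
      if (d < nearestDistance) {
        nearestDistance = d;
        nearest = i;
      }
    }
    return nearest; // emitted as the map output key (the cluster identifier)
  }
}
{noformat}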

Canopy clustering can be used to compute the initial clusters for k-Means:
{noformat}
// run the CanopyDriver job
CanopyDriver.runJob("testdata", "output", ManhattanDistanceMeasure.class.getName(),
    (float) 3.1, (float) 2.1, false);

// now run the KMeansDriver job
KMeansDriver.runJob("testdata", "output/clusters-0", "output",
    EuclideanDistanceMeasure.class.getName(), "0.001", "10", true);
{noformat}

In the above example, the input data points are stored in 'testdata' and the CanopyDriver is configured to output to the 'output/clusters-0' directory, which will contain the canopy definition files once the driver has executed. Upon running the KMeansDriver, the output directory will have two or more new directories: 'clusters-N', containing the clusters for each iteration, and 'clusteredPoints', containing the clustered data points.
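
The resulting layout of the output directory then looks roughly like this (the number of clusters-N directories depends on how many iterations were run):

{noformat}
output/
  clusters-0/        initial canopies produced by the CanopyDriver
  clusters-1/        clusters after the first k-Means iteration
  ...
  clusters-N/        clusters after the final iteration
  clusteredPoints/   final assignment of the input points to clusters
{noformat}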

This diagram shows the example dataflow of the k-Means implementation provided by Mahout:
{gliffy:name=Example implementation of k-Means provided with Mahout|space=MAHOUT|page=k-Means|pageid=75159|align=left|size=L|version=7}

This diagram doesn't include Canopy clustering:
{gliffy:name=k-Means Example|space=MAHOUT|page=k-Means|align=left|size=L}

h2. Running k-Means Clustering

The k-Means clustering algorithm may be run using a command-line invocation on KMeansDriver.main
or by making a Java call to KMeansDriver.runJob(). 

Invocation using the command line takes the form:

{noformat}
bin/mahout kmeans \
    -i <input vectors directory> \
    -c <input clusters directory> \
    -o <output working directory> \
    -k <optional number of initial clusters to sample from input vectors> \
    -dm <DistanceMeasure> \
    -x <maximum number of iterations> \
    -cd <optional convergence delta. Default is 0.5> \
    -ow <overwrite output directory if present> \
    -cl <run input vector clustering after computing the cluster centers> \
    -xm <execution method: sequential or mapreduce>
{noformat}

Note: if the -k argument is supplied, any clusters in the -c directory will be overwritten
and -k random points will be sampled from the input vectors to become the initial cluster
centers.
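
For example, a hypothetical invocation that samples 20 random initial centers from the input vectors (directory names are illustrative):

{noformat}
bin/mahout kmeans \
    -i reuters-sparse/tfidf-vectors \
    -c reuters-initial-clusters \
    -o reuters-kmeans \
    -dm org.apache.mahout.common.distance.SquaredEuclideanDistanceMeasure \
    -k 20 -x 10 -ow -cl
{noformat}

Because -k is given, any contents of reuters-initial-clusters would be overwritten with the 20 sampled centers before the iterations begin.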

Invocation using Java involves supplying the following arguments:

# input: a file path string to a directory containing the input data set as SequenceFile(WritableComparable, VectorWritable). The sequence file _key_ is not used.
# clusters: a file path string to a directory containing the initial clusters, a SequenceFile(key,
Cluster | Canopy). Both KMeans clusters and Canopy canopies may be used for the initial clusters.
# output: a file path string to an empty directory which is used for all output from the algorithm.
# distanceMeasure: the fully-qualified class name of an instance of DistanceMeasure which
will be used for the clustering.
# convergenceDelta: a double value used to determine if the algorithm has converged (no cluster center has moved more than this value in the last iteration)
# maxIter: the maximum number of iterations to run, independent of the convergence specified
# runClustering: a boolean indicating, if true, that the clustering step is to be executed
after clusters have been determined.
# runSequential: a boolean indicating, if true, that the k-means sequential implementation
is to be used to process the input data.
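
Putting those arguments together, a Java invocation might look like the following. This mirrors the Canopy example above; the exact runJob signature (including whether it takes the runSequential flag) varies between Mahout versions, so treat it as a sketch:

{noformat}
// argument order follows the list above (sketch only)
KMeansDriver.runJob("testdata",                               // input
                    "output/clusters-0",                      // clusters
                    "output",                                 // output
                    EuclideanDistanceMeasure.class.getName(), // distanceMeasure
                    "0.001",                                  // convergenceDelta
                    "10",                                     // maxIter
                    true);                                    // runClustering
{noformat}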

After running the algorithm, the output directory will contain:
# clusters-N: directories containing SequenceFiles(Text, Cluster) produced by the algorithm
for each iteration. The Text _key_ is a cluster identifier string.
# clusteredPoints: (if --clustering enabled) a directory containing SequenceFile(IntWritable,
WeightedVectorWritable). The IntWritable _key_ is the clusterId. The WeightedVectorWritable
_value_ is a bean containing a double _weight_ and a VectorWritable _vector_ where the weight
indicates the probability that the vector is a member of the cluster. For k-Means clustering,
the weights are all 1.0 since the algorithm selects only a single, most likely cluster for
each point.
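
To inspect the clustered points programmatically, the output can be read back with the standard Hadoop SequenceFile API. This is a sketch; the part file name is illustrative and should be replaced by listing the clusteredPoints directory:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.clustering.WeightedVectorWritable;

public final class DumpClusteredPoints {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Illustrative part file name; list the directory for the real ones.
    Path path = new Path("output/clusteredPoints/part-m-00000");
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    IntWritable clusterId = new IntWritable();
    WeightedVectorWritable observation = new WeightedVectorWritable();
    while (reader.next(clusterId, observation)) {
      System.out.println(clusterId.get() + ": " + observation.getVector());
    }
    reader.close();
  }
}
{noformat}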

h1. Examples

The following images illustrate k-Means clustering applied to a set of randomly-generated
2-d data points. The points are generated using a normal distribution centered at a mean location
and with a constant standard deviation. See the README file at [/examples/src/main/java/org/apache/mahout/clustering/display/README.txt|http://svn.apache.org/repos/asf/mahout/trunk/examples/src/main/java/org/apache/mahout/clustering/display/README.txt] for details on running similar examples.

The points are generated as follows:

* 500 samples m=\[1.0, 1.0\] sd=3.0
* 300 samples m=\[1.0, 0.0\] sd=0.5
* 300 samples m=\[0.0, 2.0\] sd=0.1
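
The same sample data can be generated with a few lines of plain Java (a sketch using java.util.Random; the display examples referenced above use their own generator):

{noformat}
import java.util.Random;

// Illustrative: draw 2-d points from a normal distribution around a mean.
public final class SampleDataSketch {

  static double[][] samples(int count, double[] mean, double sd, Random rnd) {
    double[][] points = new double[count][2];
    for (int i = 0; i < count; i++) {
      points[i][0] = mean[0] + sd * rnd.nextGaussian();
      points[i][1] = mean[1] + sd * rnd.nextGaussian();
    }
    return points;
  }

  public static void main(String[] args) {
    Random rnd = new Random();
    double[][] a = samples(500, new double[] {1.0, 1.0}, 3.0, rnd);
    double[][] b = samples(300, new double[] {1.0, 0.0}, 0.5, rnd);
    double[][] c = samples(300, new double[] {0.0, 2.0}, 0.1, rnd);
  }
}
{noformat}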

In the first image, the points are plotted and the 3-sigma boundaries of their generator are
superimposed. 

!SampleData.png!

In the second image, the resulting clusters (k=3) are shown superimposed upon the sample data.
As k-Means is an iterative algorithm, the cluster centers from each iteration are shown in a different color. Bold red is the final clustering, and previous iterations are shown in \[orange, yellow, green, blue, violet and gray\]. Although it misses a lot of the points and cannot recover the original, superimposed cluster centers, it does a decent job of clustering this data.

!KMeans.png!

The third image shows the results of running k-Means on a different data set (see [Dirichlet
Process Clustering] for details) which is generated using asymmetrical standard deviations.
K-Means does a fair job handling this data set as well.

!2dKMeans.png!
