Space: Apache Lucene Mahout (http://cwiki.apache.org/confluence/display/MAHOUT)
Page: kMeans (http://cwiki.apache.org/confluence/display/MAHOUT/kMeans)
Edited by Jeff Eastman:

h1. kMeans
kMeans is a rather simple but well-known algorithm for grouping objects, i.e. clustering. As with
the other algorithms, all objects need to be represented as a set of numerical features. In addition,
the user has to specify the number of groups (referred to as _k_) they wish to identify.
Each object can be thought of as being represented by a feature vector in an _n_-dimensional
space, _n_ being the number of features used to describe the objects to cluster. The algorithm
then randomly chooses _k_ points in that vector space; these points serve as the initial centers
of the clusters. Afterwards, each object is assigned to the center it is closest to. Usually
the distance measure is chosen by the user and determined by the learning task.
After that, for each cluster a new center is computed by averaging the feature vectors of
all objects assigned to it. The process of assigning objects and recomputing centers is repeated
until the process converges. The algorithm can be proven to converge after a finite number
of iterations.
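The assign/recompute loop described above can be sketched in plain Java. This is a minimal in-memory sketch of the serial algorithm, not the Mahout implementation; the class name, the first-k-points seeding and the hard-coded Euclidean distance are illustrative choices:

```java
import java.util.Arrays;

public class SimpleKMeans {

    // Cluster 'points' into k groups: assign each point to its nearest center,
    // recompute each center as the mean of its assigned points, and repeat
    // until no center moves more than 'delta' or maxIter is reached.
    public static double[][] cluster(double[][] points, int k, double delta, int maxIter) {
        int n = points[0].length;
        double[][] centers = new double[k][];
        // For simplicity the first k points seed the centers; Mahout instead
        // uses random points or the canopies/clusters from its input directory.
        for (int i = 0; i < k; i++) {
            centers[i] = points[i].clone();
        }
        for (int iter = 0; iter < maxIter; iter++) {
            double[][] sums = new double[k][n];
            int[] counts = new int[k];
            for (double[] p : points) {
                int best = nearest(centers, p);
                counts[best]++;
                for (int d = 0; d < n; d++) sums[best][d] += p[d];
            }
            boolean converged = true;
            for (int i = 0; i < k; i++) {
                if (counts[i] == 0) continue; // keep an empty cluster's old center
                double[] mean = new double[n];
                for (int d = 0; d < n; d++) mean[d] = sums[i][d] / counts[i];
                if (distance(mean, centers[i]) > delta) converged = false;
                centers[i] = mean;
            }
            if (converged) break;
        }
        return centers;
    }

    static int nearest(double[][] centers, double[] p) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < centers.length; i++) {
            double d = distance(centers[i], p);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    // Euclidean distance; in Mahout this is the pluggable DistanceMeasure.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[][] points = { {1, 1}, {1.5, 2}, {8, 8}, {8.5, 9}, {9, 8} };
        for (double[] c : cluster(points, 2, 0.001, 10)) {
            System.out.println(Arrays.toString(c));
        }
    }
}
```

Note that only the per-point assignment and the per-cluster averaging touch the data, which is what makes the algorithm parallelize so naturally as map and reduce steps.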
Several tweaks concerning the distance measure, the choice of initial centers and the computation
of new average centers have been explored, as well as the estimation of the number of clusters _k_. Yet the
main principle always remains the same.
h2. Quickstart
[Here^quickstartkmeans.sh] is a short shell script outline that will get you started quickly
with kMeans. This does the following:
* Get the Reuters dataset
* Run org.apache.lucene.benchmark.utils.ExtractReuters to generate reuters-out from reuters-sgm (the
downloaded archive)
* Run seqdirectory to convert reutersout to SequenceFile format
* Run seq2sparse to convert SequenceFiles to sparse vector format
* Finally, run kMeans with 20 clusters.
After following the output that scrolls past, reading the code will give you a better
understanding.
h2. Strategy for parallelization
Some ideas can be found in the [Cluster computing and MapReduce|http://code.google.com/edu/content/submissions/mapreduceminilecture/listing.html]
lecture video series \[by Google(r)\]; kMeans clustering is discussed in [lecture #4|http://www.youtube.com/watch?v=1ZDybXl212Q].
Slides can be found [here|http://code.google.com/edu/content/submissions/mapreduceminilecture/lec4clustering.ppt].
Interestingly, a Hadoop-based implementation using [Canopy clustering|http://en.wikipedia.org/wiki/Canopy_clustering_algorithm]
seems to be here: [http://code.google.com/p/canopyclustering/] (GPL 3 license)
Here's another useful paper [http://www2.chass.ncsu.edu/garson/PA765/cluster.htm].
h2. Design of implementation
The implementation accepts two input directories: one for the data points and one for the
initial clusters. The data directory contains multiple input files of SequenceFile(key, VectorWritable),
while the clusters directory contains one or more SequenceFiles(Text, Cluster | Canopy) containing
_k_ initial clusters or canopies. Neither input directory is modified by the implementation,
allowing experimentation with initial clustering and convergence values.
The program iterates over the input points and clusters, outputting a new directory "clusters-N"
containing SequenceFile(Text, Cluster) files for each iteration N. This process uses a mapper/combiner/reducer/driver
as follows:
* KMeansMapper - reads the input clusters during its configure() method, then assigns and
outputs each input point to its nearest cluster as defined by the user-supplied distance measure.
Output key is: encoded cluster. Output value is: input point.
* KMeansCombiner - receives all key:value pairs from the mapper and produces partial sums
of the input vectors for each cluster. Output key is: encoded cluster. Output value is: "<number
of points in partial sum>, <partial sum vector summing all such points>".
* KMeansReducer - a single reducer receives all key:value pairs from all combiners and sums
them to produce a new centroid for the cluster, which is output. Output key is: encoded cluster
identifier (e.g. "C14"). Output value is: formatted cluster (e.g. "C14 - \[c1, c2, ..., cn,
\]"). The reducer encodes unconverged clusters with a 'Cn' cluster Id and converged clusters
with a 'Vn' cluster Id.
* KMeansDriver - iterates over the points and clusters until all output clusters have converged
('Vn' cluster Ids) or until a maximum number of iterations has been reached. During iterations,
a new clusters directory "clusters-N" is produced with the output clusters from the previous
iteration used as input to the next. A final, optional pass over the data using the KMeansClusterMapper
clusters all points to an output directory "clusteredPoints"; this pass has no combiner or reducer
steps.
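The combiner/reducer hand-off above can be illustrated with a small sketch: each combiner emits a count plus a partial-sum vector, and the reducer merges those and divides to get the new centroid. This is an illustrative standalone class, not the actual Mahout KMeansCombiner/KMeansReducer code:

```java
import java.util.Arrays;
import java.util.List;

public class CentroidMerge {

    // One combiner output for a cluster key: how many points were summed
    // locally, and the vector sum of those points.
    static final class PartialSum {
        final int count;
        final double[] sum;
        PartialSum(int count, double[] sum) { this.count = count; this.sum = sum; }
    }

    // Reducer-side merge: add all partial sums received for one cluster key,
    // then divide by the total point count to obtain the new centroid.
    static double[] newCentroid(List<PartialSum> partials) {
        int dims = partials.get(0).sum.length;
        double[] total = new double[dims];
        int points = 0;
        for (PartialSum p : partials) {
            points += p.count;
            for (int d = 0; d < dims; d++) total[d] += p.sum[d];
        }
        for (int d = 0; d < dims; d++) total[d] /= points;
        return total;
    }

    public static void main(String[] args) {
        // Two combiners each pre-summed two points of the same cluster:
        // (1,2)+(2,3) = (3,5) and (4,5)+(5,6) = (9,11).
        List<PartialSum> partials = List.of(
            new PartialSum(2, new double[] {3.0, 5.0}),
            new PartialSum(2, new double[] {9.0, 11.0}));
        // Centroid of the four points: (12/4, 16/4) = (3.0, 4.0).
        System.out.println(Arrays.toString(newCentroid(partials)));
    }
}
```

Because the sum of sums equals the sum over all points, the combiner loses no information while drastically reducing the data shuffled to the single reducer.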
Canopy clustering can be used to compute the initial clusters for kMeans:
{quote}
// run the CanopyDriver job
CanopyDriver.runJob("testdata", "output", ManhattanDistanceMeasure.class.getName(), (float) 3.1, (float) 2.1, false);
// now run the KMeansDriver job
KMeansDriver.runJob("testdata", "output/clusters-0", "output", EuclideanDistanceMeasure.class.getName(), "0.001", "10", true);
{quote}
In the above example, the input data points are stored in 'testdata' and the CanopyDriver
is configured to output to the 'output/clusters-0' directory. Once the driver executes, that
directory will contain the canopy definition files. Upon running the KMeansDriver, the output
directory will have two or more new directories: 'clusters-N' containing the clusters for each
iteration and 'clusteredPoints' containing the clustered data points.
This diagram shows the exemplary dataflow of the kMeans example implementation provided by
Mahout:
{gliffy:name=Example implementation of kMeans provided with Mahout|space=MAHOUT|page=kMeans|pageid=75159|align=left|size=L}
This diagram doesn't consider CanopyClustering:
{gliffy:name=kMeans Example|space=MAHOUT|page=kMeans|align=left|size=L}
h2. Running kMeans Clustering
The kMeans clustering algorithm may be run using a command-line invocation on KMeansDriver.main
or by making a Java call to KMeansDriver.runJob(). Both require several arguments:
# input: a file path string to a directory containing the input data set as SequenceFile(WritableComparable,
VectorWritable). The sequence file _key_ is not used.
# clustersIn: a file path string to a directory containing the initial clusters, a SequenceFile(key,
Cluster | Canopy). Both KMeans clusters and Canopy canopies may be used for the initial clusters.
# output: a file path string to an empty directory which is used for all output from the algorithm.
# measure: the fully-qualified class name of an instance of DistanceMeasure which will be
used for the clustering.
# convergence: a double value used to determine if the algorithm has converged (no cluster center
has moved more than this value in the last iteration)
# maxiterations: the maximum number of iterations to run, independent of the convergence
specified
# numreducers: the number of reducer tasks to be launched. Each reducer will process a subset
of the clusters, in the limit, one per cluster.
# runClustering: a boolean indicating, if true, that the clustering step is to be executed
after clusters have been determined.
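The convergence argument above amounts to a per-cluster test between iterations, which can be sketched as follows. The class name is illustrative, and Euclidean distance is assumed here for simplicity; the actual check uses the user-supplied DistanceMeasure:

```java
public class ConvergenceCheck {

    // A cluster counts as converged when its center moved by no more than
    // the convergence delta between two iterations. Euclidean distance is
    // assumed here; the driver applies whatever DistanceMeasure was supplied.
    static boolean converged(double[] oldCenter, double[] newCenter, double delta) {
        double sum = 0;
        for (int i = 0; i < oldCenter.length; i++) {
            double diff = newCenter[i] - oldCenter[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum) <= delta;
    }

    public static void main(String[] args) {
        // Moved by 0.0005: within a delta of 0.001, so converged.
        System.out.println(converged(new double[] {1.0, 1.0}, new double[] {1.0005, 1.0}, 0.001));
        // Moved by 0.5: well beyond the delta, so not converged.
        System.out.println(converged(new double[] {1.0, 1.0}, new double[] {1.5, 1.0}, 0.001));
    }
}
```

The driver stops early only when every output cluster passes this test (all 'Vn' Ids); otherwise it runs until maxiterations.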
After running the algorithm, the output directory will contain:
# clusters-N: directories containing SequenceFiles(Text, Cluster) produced by the algorithm
for each iteration. The Text _key_ is a cluster identifier string.
# clusteredPoints: (if runClustering enabled) a directory containing SequenceFile(IntWritable,
WeightedVectorWritable). The IntWritable _key_ is the clusterId. The WeightedVectorWritable
_value_ is a bean containing a double _weight_ and a VectorWritable _vector_ where the weight
indicates the probability that the vector is a member of the cluster. For kMeans clustering,
the weights are all 1.0 since the algorithm selects only a single, most likely cluster for
each point.
