Canopy Clustering (MAHOUT) edited by Jeff Eastman
Page: http://cwiki.apache.org/confluence/display/MAHOUT/Canopy+Clustering
Changes: http://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=75998&originalVersion=4&revisedVersion=5
Content:

h1. Canopy Clustering
[Canopy Clustering|http://www.kamalnigam.com/papers/canopykdd00.pdf] is a very simple, fast
and surprisingly accurate method for grouping objects into clusters. All objects are represented
as points in a multidimensional feature space. The algorithm uses a fast approximate distance
metric and two distance thresholds T1 > T2 for processing. The basic algorithm is to begin
with a set of points and remove one at random. Create a Canopy containing this point and iterate
through the remainder of the point set. For each remaining point, if its distance from the
chosen point is < T1, add it to the Canopy. If, in addition, the distance is < T2, remove
the point from the set. In this way, points that are very close to the chosen point avoid all
further processing. The algorithm loops until the initial set is empty, accumulating a
set of Canopies, each containing one or more points. A given point may occur in more than
one Canopy.
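The loop above can be sketched sequentially in plain Java. This is an illustrative sketch, not the Mahout code: the class and method names and the choice of Euclidean distance are assumptions, and "remove one at random" is simplified to removing the first remaining point.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sequential sketch of canopy clustering; not the Mahout classes.
public class CanopySketch {

    // Euclidean distance, standing in for the "fast approximate" metric.
    static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Each canopy holds its seed point plus every point within T1 of it.
    // Points within T2 of a seed are removed from the working set, so they
    // skip all further processing; a point may still land in several canopies.
    static List<List<double[]>> canopies(List<double[]> points, double t1, double t2) {
        List<double[]> remaining = new ArrayList<>(points);
        List<List<double[]>> result = new ArrayList<>();
        while (!remaining.isEmpty()) {
            double[] center = remaining.remove(0); // "at random"; first, for simplicity
            List<double[]> canopy = new ArrayList<>();
            canopy.add(center);
            remaining.removeIf(p -> {
                double d = distance(center, p);
                if (d < t1) canopy.add(p);   // loosely bound: member of this canopy
                return d < t2;               // strongly bound: drop from working set
            });
            result.add(canopy);
        }
        return result;
    }
}
```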
Canopy Clustering is often used as an initial step in more rigorous clustering techniques,
such as [kMeans]. By starting with an initial clustering, the number of more expensive distance
measurements can be significantly reduced, since points outside of the initial canopies can be ignored.
h2. Strategy for parallelization
Looking at the sample Hadoop implementation in [http://code.google.com/p/canopyclustering/],
the processing is done in the following steps:
# The data is massaged into suitable input format
# Each mapper performs canopy clustering on the points in its input set and outputs its canopies'
centers
# The reducer clusters the canopy centers to produce the final canopy centers
# The points are then clustered into these final canopies
Some ideas can be found in the [Cluster computing and MapReduce|http://code.google.com/edu/content/submissions/mapreduceminilecture/listing.html]
lecture video series \[by Google(r)\]; Canopy Clustering is discussed in [lecture #4|http://www.youtube.com/watch?v=1ZDybXl212Q].
Slides can be found [here|http://code.google.com/edu/content/submissions/mapreduceminilecture/lec4clustering.ppt].
Finally, here is the [Wikipedia page|http://en.wikipedia.org/wiki/Canopy_clustering_algorithm].
h2. Design of implementation
The initial implementation accepts input files containing multidimensional points (Float[])
given as comma-terminated values enclosed in brackets (e.g. "\[1.5,2.5,\]"). Processing is
done in two phases: Canopy generation and Clustering.
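Parsing that bracketed format can be sketched as follows; the class and method names are illustrative, not the actual Mahout parser.

```java
// Illustrative parser for the "[1.5,2.5,]" input format; not the Mahout code.
public class PointParser {

    // Parses a comma-terminated, bracket-enclosed point into its coordinates.
    static Float[] parsePoint(String line) {
        int open = line.indexOf('[');
        int close = line.indexOf(']');
        String body = line.substring(open + 1, close); // e.g. "1.5,2.5,"
        String[] parts = body.split(",");              // trailing comma yields no empty part
        Float[] coords = new Float[parts.length];
        for (int i = 0; i < parts.length; i++) {
            coords[i] = Float.parseFloat(parts[i].trim());
        }
        return coords;
    }
}
```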
h3. Canopy generation phase
During the map step, each mapper processes a subset of the total points and applies the chosen
distance measure and thresholds to generate canopies. In the mapper, each point that falls
within an existing canopy is output to a combiner, keyed by that canopy's id. After the output
has been sorted by canopyId key, the combiner sees an iterator over all points for each
canopyId key. The combiner sums all of the points having that key and normalizes the
total to produce a canopy centroid, which is output to a single reducer under a constant key
("centroid"). The reducer receives all of the initial centroids and again applies the canopy
measure and thresholds to produce a final set of canopy centroids (i.e. it clusters the
cluster centroids). The reducer output format is: canopyId\t\[<canopy-centroid-coordinates>\].
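The combiner's sum-and-normalize step amounts to averaging the points seen under one canopyId key. A minimal sketch in plain Java (not the Hadoop Combiner API; the class name is an assumption):

```java
import java.util.List;

// Illustrative combiner logic: average the points emitted under one canopyId.
public class CentroidCombiner {

    // Sums the points for one key and normalizes by the count to get the centroid.
    static double[] centroid(List<double[]> points) {
        double[] total = new double[points.get(0).length];
        for (double[] p : points) {
            for (int i = 0; i < total.length; i++) {
                total[i] += p[i];
            }
        }
        for (int i = 0; i < total.length; i++) {
            total[i] /= points.size(); // normalize the sum to the mean
        }
        return total;
    }
}
```

In the actual M/R job this runs once per canopyId key over the combiner's iterator; the reducer then re-runs canopy generation over these centroids to "cluster the cluster centroids".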
h3. Clustering phase
During the clustering phase, each mapper reads the canopy centroids produced by the first
phase. Since all mappers share the same canopy definitions, their outputs will be combined
during the shuffle so that each reducer (many are allowed here) sees all of the points
assigned to one or more canopies. The output format will then be: <canopy-definition>\t\[<member-point-coordinates>\]
<payload>. My plan is to include the canopyId, measure, thresholds and centroid in the
<canopy-definition> so that the output will be self-descriptive. The plan is also to
allow any information encoded in the input points after the coordinate delimiter '\]' to be
treated as payload and passed through the clustering phase without modification.
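The clustering-phase mapper logic described above can be sketched as follows: emit one record per final canopy whose center lies within T1 of the point, passing the payload through untouched. The class name, record layout details, and use of a simple integer canopyId are assumptions for illustration, not the Mahout implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative clustering-phase mapper logic; not the Mahout classes.
public class ClusterMapperSketch {

    // Euclidean distance, standing in for the configured measure.
    static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Emits one "canopyId\t[coords] payload" record for every final canopy
    // whose center is within T1 of the point; the payload passes through unchanged.
    static List<String> assign(double[] point, String payload,
                               List<double[]> centers, double t1) {
        List<String> records = new ArrayList<>();
        for (int id = 0; id < centers.size(); id++) {
            if (distance(point, centers.get(id)) < t1) {
                StringBuilder sb = new StringBuilder();
                sb.append(id).append('\t').append('[');
                for (double c : point) {
                    sb.append(c).append(',');
                }
                sb.append("] ").append(payload);
                records.add(sb.toString());
            }
        }
        return records;
    }
}
```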

