# What exactly is the k-means++ algorithm, and how do you actually go about implementing it?

6 views (last 30 days)
Tim Robbins on 13 Oct 2015
Answered: Tom Lane on 24 Oct 2015
In the k-means++ algorithm, after selecting the initial centroid, how do you select the second center and each one after that? What does it actually mean? I've been through the original paper and the MATLAB documentation, but nobody seems to go into detail about this topic at all. Please help, I'm going nuts. Please explain it as you would to someone with barely any knowledge of MATLAB.

Walter Roberson on 13 Oct 2015
Edited: Walter Roberson on 14 Oct 2015
Tim Robbins on 14 Oct 2015
"Choose one new data point at random as a new center, using a weighted probability distribution where a point x is chosen with probability proportional to D(x)^2."
Please explain this slowly, in an easy-to-understand manner. I've read this before, and this is the step I can't seem to understand at all. Some places say the point is selected according to the probability; others say it is sampled according to the probability.
My understanding so far is this: choose the initial center at random, then what? Do I choose another one at random and then find its probability? Do I repeat this for all the points and find the probability at each step for every point?
How does one go about choosing the probability? Does the probability have to be more or less? Please help, I'm desperate. There are literally three places on the web mentioning this, and everyone else just quotes them over and over again: the Wikipedia page, the original paper, and MATLAB's documentation.
Other than that, I can't seem to find a book or any other paper that goes into detail about this.
Even the original paper gives just five lines of the algorithm, and the rest goes on to prove how k-means++ is better and more efficient than all the other techniques.
Please take this into account and tell me your interpretation of the statement above.

Tom Lane on 24 Oct 2015
First off, if you look inside kmeans.m you should find an implementation of this.
I'm hoping this is the kind of information you want, based on your other comments. Pick a point at random as the first cluster center. The general idea is that you want the other cluster centers to be far from the first one. Suppose the D^2 distances from the other points to the first center are, in descending order, 12, 10, 2, 1.1, 0.5, ..., and suppose they sum to 30. Then you would choose as the second center the first point with probability 12/30, the second with probability 10/30, and so on.
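To make the "chosen with probability proportional to D^2" step concrete, here is a small Python sketch of that weighted draw. The distances 12, 10, 2, 1.1, 0.5 come from the example above; the trailing value 4.4 is a made-up filler so the hypothetical list sums to the example's total of 30 (this is not MATLAB's code, just an illustration of the sampling rule).

```python
import random

# Hypothetical D^2 values: squared distances from each candidate point to
# the nearest existing center. The first five match the example; 4.4 is an
# assumed filler so the total is 30, as in the example.
d2 = [12, 10, 2, 1.1, 0.5, 4.4]
total = sum(d2)                      # 30.0
probs = [d / total for d in d2]      # point i is chosen with probability d2[i]/total

# random.choices performs exactly this weighted draw: the first point is
# picked with probability 12/30, the second with 10/30, and so on.
idx = random.choices(range(len(d2)), weights=d2, k=1)[0]
```

Note that a point far from the current center (large D^2) is much more likely to be picked, but near points still have a nonzero chance; the choice is random, not simply "take the farthest point".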
For the third center, proceed the same way, but D^2 for any point is now the minimum of the squared distances from that point to the first two centers. Then choose centers 4, 5, ..., k the same way.
Generally this just starts you out. You proceed with the k-means algorithm iterations once you select the k starting centers.
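Putting the whole initialization together, the steps above can be sketched as follows. This is a plain-Python illustration of the k-means++ seeding procedure, not MATLAB's kmeans.m; the function name `kmeans_pp_init` and the sample points are made up for the example, and the regular k-means iterations would run afterwards using these centers.

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    # Step 1: pick the first center uniformly at random.
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Step 2: for every point, compute D^2 = squared distance to the
        # nearest center chosen so far.
        d2 = [min(sum((px - cx) ** 2 for px, cx in zip(p, c))
                  for c in centers)
              for p in points]
        # Step 3: sample the next center with probability proportional to D^2.
        # Already-chosen points have D^2 = 0, so they are never re-chosen.
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers

# Hypothetical 2-D data with three loose groups, just to exercise the sketch.
pts = [(0, 0), (0, 1), (10, 10), (10, 11), (-5, 5)]
centers = kmeans_pp_init(pts, 3)
```

After this, you would run ordinary k-means (assign each point to its nearest center, recompute the centers as cluster means, repeat) starting from `centers`.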