k-medoids
Reposted from http://en.wikipedia.org/wiki/K-medoids
The k-medoids algorithm is a clustering algorithm related to the k-means algorithm and the medoidshift algorithm. Both the k-means and k-medoids algorithms are partitional (breaking the dataset up into groups) and both attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses datapoints as centers (medoids or exemplars) and works with an arbitrary matrix of distances between datapoints instead of the l2 distance. The method was proposed in 1987 for the work with the l1 norm and other distances.
k-medoid is a classical partitioning technique of clustering that clusters the data set of n objects into k clusters, with k known a priori. A useful tool for determining k is the silhouette.
It is more robust to noise and outliers than k-means, because it minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances.
A medoid can be defined as the object of a cluster whose average dissimilarity to all the objects in the cluster is minimal, i.e. it is the most centrally located point in the cluster.
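For concreteness, here is a minimal Python sketch of this definition, assuming Manhattan distance as the dissimilarity (the one used in the demonstration below); the function names are only illustrative.

```python
# A medoid is the cluster member whose total (equivalently, average)
# dissimilarity to all other members of the cluster is smallest.
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def medoid(cluster, dist=manhattan):
    # Pick the point that minimises the sum of distances to every other point.
    return min(cluster, key=lambda p: sum(dist(p, q) for q in cluster))

print(medoid([(0, 0), (1, 0), (5, 0)]))  # -> (1, 0)
```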
The most common realisation of k-medoid clustering is the Partitioning Around Medoids (PAM) algorithm, which works as follows (a short code sketch follows the steps):
1. Initialize: randomly select k of the n data points as the medoids.
2. Associate each data point to the closest medoid ("closest" here is defined using any valid distance metric, most commonly Euclidean distance, Manhattan distance or Minkowski distance).
3. For each medoid m and each non-medoid data point o: swap m and o and compute the total cost of the configuration.
4. Select the configuration with the lowest cost.
5. Repeat steps 2 to 4 until there is no change in the medoids.
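A compact Python sketch of these steps, assuming the data points are coordinate tuples and using the Manhattan distance from the demonstration below (any other distance function could be substituted); it is a greedy illustration of the procedure, not an optimised implementation.

```python
import random

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def total_cost(points, medoids, dist):
    # Step 2: every point contributes its distance to the closest medoid.
    return sum(min(dist(p, m) for m in medoids) for p in points)

def pam(points, k, dist=manhattan, seed=None):
    rng = random.Random(seed)
    medoids = rng.sample(points, k)              # Step 1: random initialisation
    best = total_cost(points, medoids, dist)
    improved = True
    while improved:                              # Step 5: repeat until stable
        improved = False
        for i in range(k):                       # Step 3: try every medoid/non-medoid swap
            for o in points:
                if o in medoids:
                    continue
                candidate = medoids[:i] + [o] + medoids[i + 1:]
                cost = total_cost(points, candidate, dist)
                if cost < best:                  # Step 4: keep the cheapest configuration
                    best, medoids, improved = cost, candidate, True
    return medoids, best

# Example call on the ten-point data set used in the demonstration below.
data = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
        (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]
print(pam(data, k=2, seed=0))
```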
Demonstration of PAM
Cluster the following data set of ten objects into two clusters, i.e. k = 2.
Consider a data set of ten objects as follows:
Figure 1.1 – distribution of the data
Xi     x   y
X1     2   6
X2     3   4
X3     3   8
X4     4   7
X5     6   2
X6     6   4
X7     7   3
X8     7   4
X9     8   5
X10    7   6
Step 1
Figure 1.2 – clusters after step 1
Initialize k centers.
Let us assume c1 = (3,4) and c2 = (7,4).
So here c1 and c2 are selected as medoids.
Calculate the distances so as to associate each data object with its nearest medoid. Cost is calculated using Manhattan distance (the Minkowski distance metric with r = 1). For each data object, the smaller of its two costs in the tables below determines which medoid it is assigned to.
i    c1       Data objects (Xi)   Cost (distance)
1    (3, 4)   (2, 6)              3
3    (3, 4)   (3, 8)              4
4    (3, 4)   (4, 7)              4
5    (3, 4)   (6, 2)              5
6    (3, 4)   (6, 4)              3
7    (3, 4)   (7, 3)              5
9    (3, 4)   (8, 5)              6
10   (3, 4)   (7, 6)              6
i    c2       Data objects (Xi)   Cost (distance)
1    (7, 4)   (2, 6)              7
3    (7, 4)   (3, 8)              8
4    (7, 4)   (4, 7)              6
5    (7, 4)   (6, 2)              3
6    (7, 4)   (6, 4)              1
7    (7, 4)   (7, 3)              1
9    (7, 4)   (8, 5)              2
10   (7, 4)   (7, 6)              2
Then the clusters become:
Cluster1 = {(3,4), (2,6), (3,8), (4,7)}
Cluster2 = {(7,4), (6,2), (6,4), (7,3), (8,5), (7,6)}
Since the points (2,6), (3,8) and (4,7) are closer to c1, they form one cluster, whilst the remaining points form the other cluster.
So the total cost involved is 20.
The cost between any two points is found using the formula

    cost(x, c) = sum over d = 1..D of |x_d - c_d|

where x is any data object, c is the medoid, and D is the number of dimensions of the objects, which in this case is 2.

The total cost is the summation of the costs of the data objects to the medoids of their clusters, so here:

    total cost = {cost(c1,(2,6)) + cost(c1,(3,8)) + cost(c1,(4,7))}
               + {cost(c2,(6,2)) + cost(c2,(6,4)) + cost(c2,(7,3)) + cost(c2,(8,5)) + cost(c2,(7,6))}
               = (3 + 4 + 4) + (3 + 1 + 1 + 2 + 2) = 20
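The step-1 assignment and the total cost of 20 can be checked with a few lines of Python; this is only a re-computation of the arithmetic above, with the medoids hard-coded as c1 = (3,4) and c2 = (7,4).

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

points = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
          (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]
medoids = [(3, 4), (7, 4)]               # c1 and c2

total = 0
for p in points:
    if p in medoids:
        continue                         # the medoids themselves contribute zero cost
    nearest = min(medoids, key=lambda m: manhattan(p, m))
    total += manhattan(p, nearest)
print(total)                             # -> 20, matching the total cost above
```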
Step 2
Figure 1.3 – clusters after step 2
Select one of the non-medoids O′.
Let us assume O′ = (7,3).
So now the medoids are c1 = (3,4) and O′ = (7,3). If c1 and O′ are the new medoids, calculate the total cost involved using the formula from step 1.
i    c1       Data objects (Xi)   Cost (distance)
1    (3, 4)   (2, 6)              3
3    (3, 4)   (3, 8)              4
4    (3, 4)   (4, 7)              4
5    (3, 4)   (6, 2)              5
6    (3, 4)   (6, 4)              3
8    (3, 4)   (7, 4)              4
9    (3, 4)   (8, 5)              6
10   (3, 4)   (7, 6)              6
i    O′       Data objects (Xi)   Cost (distance)
1    (7, 3)   (2, 6)              8
3    (7, 3)   (3, 8)              9
4    (7, 3)   (4, 7)              7
5    (7, 3)   (6, 2)              2
6    (7, 3)   (6, 4)              2
8    (7, 3)   (7, 4)              1
9    (7, 3)   (8, 5)              3
10   (7, 3)   (7, 6)              3
Figure 2. K-medoids versus k-means. Figs 2.1a–2.1f present a typical example of k-means converging to a local minimum; this result of k-means clustering contradicts the obvious cluster structure of the data set. In this example, the k-medoids algorithm (Figs 2.2a–2.2h), with the same initial position of the medoids (Fig. 2.2a), converges to the obvious cluster structure. The small circles are data points, the four-ray stars are centroids (means), and the nine-ray stars are medoids.
So the cost of swapping the medoid from c2 to O′ is

    S = current total cost − past total cost = 22 − 20 = 2 > 0

where the current total cost of 22 is obtained by summing, for each point, the smaller of its two costs in the tables above.
So moving to O′ would be a bad idea, and the previous choice was good. We try the other non-medoids in the same way and find that our first choice was the best, so the configuration does not change and the algorithm terminates here (i.e. there is no change in the medoids).
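The swap evaluation can be verified in the same way; the sketch below compares the cost of the original configuration {c1, c2} with the candidate {c1, O′} and reproduces the difference of 2.

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

points = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
          (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]

def config_cost(medoids):
    # Total cost of a configuration: each non-medoid point is charged its
    # Manhattan distance to the nearest medoid.
    return sum(min(manhattan(p, m) for m in medoids)
               for p in points if p not in medoids)

old = config_cost([(3, 4), (7, 4)])      # {c1, c2} -> 20
new = config_cost([(3, 4), (7, 3)])      # {c1, O'} -> 22
print(new - old)                         # -> 2 > 0, so the swap is rejected
```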
It may happen that some data points shift from one cluster to another, depending on their closeness to the medoids.
In some standard situations, k-medoids demonstrates better performance than k-means. An example is presented in Fig. 2. The most time-consuming part of the k-medoids algorithm is the calculation of the distances between objects. If quadratic preprocessing and storage are applicable, the distance matrix can be precomputed to achieve a consequent speed-up; see, for example, Park and Jun (2009), where the authors also introduce a heuristic to choose the initial k medoids. A comparative study of the k-means and k-medoids algorithms was performed for normal and for uniform distributions of data points (Velmurugan and Santhanam, 2010); it demonstrated that, for large data sets, the k-medoids algorithm takes less time.
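A rough sketch of the precomputation idea mentioned above, assuming the quadratic memory is acceptable: all pairwise distances are stored once, so each cost evaluation during the swap search becomes a table lookup instead of a distance computation.

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

points = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
          (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]
n = len(points)

# Precompute the full n x n distance matrix once (quadratic storage).
D = [[manhattan(points[i], points[j]) for j in range(n)] for i in range(n)]

def config_cost(medoid_indices):
    # Cost of a configuration given as medoid indices, using lookups only.
    return sum(min(D[i][m] for m in medoid_indices)
               for i in range(n) if i not in medoid_indices)

print(config_cost([1, 7]))               # indices of c1 = (3,4) and c2 = (7,4) -> 20
```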
See also
cluster analysis
k-means
k-medians
medoid
silhouette
Software
ELKI includes several k-means variants, including K-medoids and PAM.
GNU R includes variants of k-means in the "flexclust" package, and PAM is implemented in the "cluster" package.
External links
E.M. Mirkes, K-means and K-medoids (Applet), University of Leicester, 2011.
References
Kaufman, L. and Rousseeuw, P.J. (1987), Clustering by means of Medoids, in Statistical Data Analysis Based on the L1-Norm and Related Methods, edited by Y. Dodge, North-Holland, 405–416.
Sergios Theodoridis & Konstantinos Koutroumbas (2006). Pattern Recognition 3rd ed. p. 635.
The illustration was prepared with the Java applet: E.M. Mirkes, K-means and K-medoids (applet), University of Leicester, 2011.
H.S. Park , C.H. Jun, A simple and fast algorithm for K-medoids clustering, Expert Systems with Applications, 36, (2) (2009), 3336–3341.
T. Velmurugan and T. Santhanam, Computational Complexity between K-Means and K-Medoids Clustering Algorithms for Normal and Uniform Distributions of Data Points, Journal of Computer Science 6 (3) (2010), 363-368.