K-Means has a few problems however. The first is that it isn’t a clustering algorithm, it is a partitioning algorithm. That is to say K-means doesn’t ‘find clusters’ it partitions your dataset into as many (assumed to be globular) chunks as you ask for by attempting to minimize intra-partition distances. --- Comparing Python Clustering Algorithms — hdbscan 0.8.1 documentation
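A minimal sketch of Lloyd's algorithm in plain Python (not the hdbscan library's code, just an illustration) makes the quoted point concrete: the algorithm alternates between assigning points to their nearest centroid and recomputing centroids, so every point — even an obvious outlier — is forced into one of the k partitions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's algorithm: partitions `points` into k chunks by
    alternately assigning each point to its nearest centroid and moving
    each centroid to the mean of its partition. Every point is assigned
    somewhere -- which is why this is partitioning, not clustering."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        parts = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            parts[nearest].append(p)
        # Update step: move each centroid to the mean of its partition.
        for i, part in enumerate(parts):
            if part:
                centroids[i] = (
                    sum(p[0] for p in part) / len(part),
                    sum(p[1] for p in part) / len(part),
                )
    return parts

# Two tight groups plus one far-away outlier: with k=2, K-means still
# forces the outlier (100, 100) into one of the two partitions instead
# of recognizing it as noise.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (100, 100)]
parts = kmeans(data, k=2)
assert sum(len(p) for p in parts) == len(data)  # no point left unassigned
```

A density-based method such as HDBSCAN, by contrast, can label that outlier as noise rather than assigning it to a chunk.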
Okay, this isn't the most common way to put it, but it's a nice distinction.
This page is auto-translated from /nishio/クラスタリングとパーティショニング using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.