This tutorial was developed by the authors for the Johanna Mestorf Academy and the Institute of Pre- and Protohistoric Archaeology of Kiel University for use in the teaching of quantitative archaeology. At the content level, the ISAAK team supported the tutorial with numerous ideas and inputs. The development of this tutorial was funded by the PerLe-Fonds für Lehrinnovation.
In archaeology, as in other disciplines, researchers look for groups in their data. These groups might be interpreted, for example, as different populations, different communication spaces or subgroups of a larger population, depending on the research question, theoretical framework and data used. Classification is central to archaeological typology and to the definition of archaeological ‘cultures’, and we believe it to be one of the most basic and important methods in archaeology.
In this tutorial we will guide you through the basic concepts behind classification. We will introduce you to several ways to measure distances and explain why this matters and what is appropriate for what kind of data. Afterwards we will focus on two cluster methods, hierarchical Density Based Clustering of Applications with Noise (hDBscan) and k-means, which is one of the best-known clustering algorithms in archaeology. At the end we will show you how to validate your results.
We will teach you how to analyse your data in R. Our examples are based on real archaeological datasets from the package archdata (Carlson et al. 2018).
It is recommended that you have at least a little knowledge of R beforehand. Even if you do not, this tutorial will still be helpful if you want to learn about clustering methods, as we explain the algorithms used in detail. Once you have understood them, you can use another program to run your analyses.
To understand how we do classifications, a few definitions should be mentioned:
A ‘set’ is the subsumption ‘M’ of several separately definable objects (of our perception or imagination) into a whole (freely translated from Cantor 1895).
(“Unter einer ‘Menge’ verstehen wir jede Zusammenfassung M von bestimmten wohlunterschiedenen Objekten unserer Anschauung oder unseres Denkens zu einem Ganzen”) (Cantor 1895)
Classification means creating several groups within the set on the basis of the attributes of its elements.
To classify is the act of allocating elements into the groups.
Archaeological types are interpreted groups.
Diagnostic attributes are used to determine which element belongs to which group. They might be univariate – meaning one single attribute is enough to define a group, and every object that has this attribute will belong to this group – or they might be multivariate. Multivariate means that several attributes are needed to assign an object to a group.
Sometimes several attributes correlate with one latent attribute. This can lead to the latent variable being weighted too heavily and thus bias the end result. We therefore need to make sure that no correlation between the different attributes exists; only then can we use them for a multivariate cluster analysis!
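As a minimal sketch (the data frame and the attribute names below are invented for illustration), you could inspect pairwise correlations with cor() before running a multivariate cluster analysis:

# toy data: three attributes, two of which are strongly correlated
df <- data.frame(length = c(10, 12, 15, 20, 22),
                 width  = c( 5,  6,  7, 10, 11),   # roughly proportional to length
                 handle = c( 1,  0,  1,  0,  1))
round(cor(df), 2)   # length and width correlate almost perfectly and would double-weight "size"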
Thinking about the data, one question is how many different values an attribute can take. In old-school questionnaires, for example, there were only two possible answers to the question of your gender; it was a binary attribute. It is easy to see that other attributes may have many more possible “answers” to their question.
Now, are these values disjoint? And are the elements or objects in the groups disjoint? What does this mean?
If both the elements and the attributes are disjoint, there is no overlap at all between the groups: no object has an attribute that belongs to another group. In the figure this is the upper left picture.
If the attributes are not disjoint but the objects are, then objects may carry attributes that are also used to define other groups, so an attribute can appear in several groups. In the figure this is the upper right case: object C has the attributes a, b, c and d, while c and d are the sole attributes of D.
If the attributes are disjoint but the objects are not, then objects in different groups may share attributes. Look at the lower left case in the figure: the attribute c belongs to the objects A, B, C and D.
The last case is that neither the objects nor the attributes are disjoint. This means a clear overlap of the groups: some elements and some attributes could belong to several groups. It is illustrated in the lower right corner of the figure.
If we define groups, we need to be clear about how we want to deal with gaps. Do we use a monothetic or a polythetic classification? A monothetic classification requires that every attribute assigned to a group must be observed on the object in question for it to be placed in this group. A polythetic classification takes “gaps” in the attributes of an object less seriously: an object will be assigned to the group even if some attributes are missing.
What do you think, what are most archaeological classification systems like?
Rank allocation is another topic to think about. Will you classify in a monohierarchical or a polyhierarchical way?
If you choose a monohierarchical system, every group may only have one parent group. A polyhierarchical system allows several parents for one subgroup. This is, for example, implemented in the CIDOC CRM System (if you are interested in database design, check it out).
Allocation definition describes whether we want distinct groups or fuzzy groups. Fuzziness gives degrees of probability that an element belongs to a certain group, whereas membership of distinct groups is assigned with certainty (0 or 1).
Synthetic classification systems define the possible groups beforehand. They are quite often used in libraries, where a system is established and books are slotted into it.
If different kinds of attributes are treated as facets, we talk about faceted classification. Here all facets have equal weight, but inside the facet, hierarchies can be used. The different values the facets can have are called foci.
In the example, there are colours and shapes of the objects to take into consideration for the classification. These are the facets, they rank equally. To classify the objects one can either start with the colour or with the shape.
Many different methods of numerical classification exist, such as hierarchical classification methods, density based approaches, flat classifications etc. They aim at finding natural groups, usually on the basis of a distance matrix in the feature space. We call them cluster analyses if they focus on grouping objects.
Hierarchical classification methods are based on measures of proximity or distance. With these measures we define how we distinguish points close to each other from those further apart. We will talk about them in detail further on.
If one uses the agglomerative approach, at first every point is a cluster by itself. These are then “fused” together in several steps. There are different methods of fusing the clusters to each other; several are illustrated in the figure:
Imagine you have the points a to d and we use the Euclidean distance between them to determine which points are close to each other. Single linkage simply takes the shortest distances between points: first the two points closest to each other (d and c) are linked, then the second smallest distance, between points a and b, leads to them forming a cluster. Now the two clusters are linked by the distance between c and a, as this is “the shortest way” between both clusters. After these two points are linked, all points belong to one cluster. This is the easiest way to link points to each other and it is used e.g. in hDBscan.
The second version is complete linkage. Here the definition of the “shortest distance” between two clusters is different. The first step of connecting every point to its nearest neighbour remains as in single linkage, but to merge these clusters, the two elements of the two clusters that are furthest away from each other are considered. If this distance is still the smallest one available, the clusters get connected. In the figure you can see how b and c lead to the merging of the two clusters, although the distance between them is larger than the one between a and c.
Average linkage takes the average of the distances of all members of one group to all members of the other group. The smallest average leads to a linking of these two groups.
The centroid method takes the centroid of each cluster and calculates the distances between those. Again, the clusters that are closest by this measure of distance get connected.
Next to the illustrations of the points and how they are linked you can always see the hierarchical dendrogram of the analysis. The length of the “stalks” shows how “far apart” the points that get connected are. c and d are always very close to each other, whereas a and b are further apart. The lines that connect the cluster (ab) and the cluster (cd) have a different length depending on the fusion method used. As you can see, the centroid method leads to a “weird” line: here the distance of centroid 3 to centroids 1 and 2 is smaller than the distance between the two points in cluster (ab).
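As a small sketch of these fusion methods, assuming four invented points a to d that roughly match the figure, you can compare the resulting dendrograms with R's built-in hclust():

# four example points (coordinates are an assumption for illustration)
pts <- data.frame(x = c(1, 2, 6, 7), y = c(1, 3, 2, 1),
                  row.names = c("a", "b", "c", "d"))
d <- dist(pts)                        # Euclidean distance matrix

par(mfrow = c(2, 2))
for (m in c("single", "complete", "average")) {
  plot(hclust(d, method = m), main = paste(m, "linkage"), sub = "", xlab = "")
}
# the centroid method expects squared Euclidean distances
plot(hclust(d^2, method = "centroid"), main = "centroid", sub = "", xlab = "")
par(mfrow = c(1, 1))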
There are many different algorithms for clustering and they will lead to different outputs. Each method should be used with a specific aim in mind.
In this tutorial we will talk about a hierarchical grouping method with single linkage and an added flat cluster extraction (hDBscan), and about the partitioning clustering method k-means. Both can be used for the delimitation of groups of objects.
At the end of an analysis it is important to check how valid your results are. There is internal validation: are the structures you found really in your data? How compact and how well separated are the clusters? How sure can you be that the resulting clusters fit your data best? This can be done with the silhouette method we will discuss in this tutorial.
External validation asks how well these results fit the real world data. Here it is a good idea to go back to the dataset and think critically about whether your results are feasible and explainable.
Validation of method: This means you should check again that the method used is appropriate for the question you ask.
Distance measures allow the identification of underlying structure in multivariate datasets. This tutorial shows how to compute distance measures for archaeological datasets. The problem is to select an algorithm suitable for the type of data used in the investigation. This tutorial focuses specifically on binary and nominal data.
Four different methods of distance calculation are used in this tutorial: the simple matching coefficient, the Jaccard coefficient and the Ochiai index for binary data, and the Hamming distance for categorical data.
The archdata package provides the examples for conducting the analysis (Carlson 2017). The example dataset for binary data is “Michelsberg”; for categorical data we use “DartPoints”. The distance matrix calculations for simple matching, the Jaccard coefficient and Ochiai are provided by ade4 (Dray 2020). The Hamming distance matrix is calculated using the FD package (Laliberté 2015).
Binary data consists of variables that have only two possible values, e.g. presence or absence, TRUE or FALSE, 0 or 1. In this tutorial we will code presences as 1 and absences as 0.
For binary data, dissimilarity (D) and similarity (S; D = 1 - S) are measured based on the counts of four cases: a, the number of variables present in both objects; b, the number of variables present in the first object but absent in the second; c, the number of variables absent in the first object but present in the second; and d, the number of variables absent in both objects.
Let us take the example of two objects for which the presence/absence of six variables was observed:
|          | Variable 1 | Variable 2 | Variable 3 | Variable 4 | Variable 5 | Variable 6 |
|----------|------------|------------|------------|------------|------------|------------|
| Object 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| Object 2 | 1 | 0 | 1 | 0 | 0 | 0 |
For this example, a = 1, b = 2, c = 1 and d = 2.
The equations presented in the distance measures discussed below draw upon these four cases.
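As a minimal sketch, the four counts for the two example objects above can be computed in R like this:

obj1 <- c(1, 1, 0, 1, 0, 0)
obj2 <- c(1, 0, 1, 0, 0, 0)

a <- sum(obj1 == 1 & obj2 == 1)   # shared presences: 1
b <- sum(obj1 == 1 & obj2 == 0)   # present in object 1 only: 2
c <- sum(obj1 == 0 & obj2 == 1)   # present in object 2 only: 1
d <- sum(obj1 == 0 & obj2 == 0)   # shared absences: 2
c(a = a, b = b, c = c, d = d)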
We are using the Michelsberg
dataset from the package archdata
. This dataset includes counts of 39 vessel types from 109 archaeological features belonging to 69 sites of the Central European Younger Neolithic Michelsberg Culture (MBK; 4350-3500 BC) and one site of the Funnel Beaker Culture (TBK; 4300-2800 BC). Additional information includes the Lüning phase association of each assemblage (Lüning 1967) along with its XY coordinates (UTM WGS 84 Zone 32N), site name, catalogue number (Höhn 2002) and feature number.
library(archdata)
data("Michelsberg")
Your first step in distance measuring is ensuring that you have the right dataset. The Michelsberg dataframe used in this tutorial offers abundance variables as well as nominal and ordinal variables. However, we only wish to measure the distance between objects on the basis of the presence and/or absence of types.
When creating the new dataframe, we therefore select only the abundance variables.
newdf <- Michelsberg[, 5:39]
Then we need to transform all counts greater than zero into the presence value 1.
newdf[newdf > 0] <- 1
The dataframe is now ready to be tested.
The simple matching coefficient is used to measure dissimilarity when the dataset is symmetrical. For symmetrical binary data both “0” and “1” represent meaningful classifications. Historically the most commonly cited example is gender as male and female. In archaeological research such datasets are uncommon, due to the higher number of variables encountered in research (we may classify gender as male or female, but always have to add the category “not identifiable”). Hence in this tutorial we include the necessary steps for analyzing such datasets but will not perform the analysis on an example dataset. Should you know or use symmetrical binary data, we would love to hear back from you so we could modify the analysis.
The formula for simple matching Distance of symmetrical binary data is as follows:
\[ SMC = \frac{a+d}{a+b+c+d} \]
This can be transferred to a distance measure by subtracting it from 1.
\[ D_{SMC} = 1 - \frac{a+d}{a+b+c+d} \]
It treats shared presences and shared absences equally and does not apply any additional weighting.
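As a quick sketch, continuing the two-object example from above (a = 1, b = 2, c = 1, d = 2):

a <- 1; b <- 2; c <- 1; d <- 2

D_SMC <- 1 - (a + d) / (a + b + c + d)   # 1 - 3/6 = 0.5
sqrt(D_SMC)                              # ade4::dist.binary() would report the square root, approx. 0.71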
To apply the simple matching distance, we use the package “ade4” and select SM by passing 2 to the method argument. Please note that the function produces the square root of the SM distance.
library(ade4)
dist.binary(newdf, method = 2)
The Jaccard similarity coefficient can be used for binary datasets; it is based on the number of shared presences between two objects. Considering:

- a, the number of variables shared by the two objects
- b, the number of variables present in the 1st object but not in the 2nd
- c, the number of variables present in the 2nd object but not in the 1st
then \(S = \frac{a}{(a + b + c)}\) or \(D = 1 - (\frac{a}{(a + b + c)})\)
Please note that shared absences (0s, i.e. the quantity d) are omitted; that is, the Jaccard distance is an asymmetric binary distance measure.
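For the same example objects (a = 1, b = 2, c = 1; the shared absences d are ignored), a quick sketch of the calculation:

a <- 1; b <- 2; c <- 1

D_J <- 1 - a / (a + b + c)   # 1 - 1/4 = 0.75
sqrt(D_J)                    # square root reported by ade4::dist.binary(), approx. 0.87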
To apply the Jaccard distance, we pass 1 to the method argument of the ade4 package:
dist.binary(newdf, method = 1)
## achenheim_1.1 achenheim_1.2 didenheim_2 entzheim_3
## achenheim_1.1 0.0000000 1.0000000 1.0000000 0.8164966
## achenheim_1.2 1.0000000 0.0000000 0.9128709 1.0000000
## didenheim_2 1.0000000 0.9128709 0.0000000 1.0000000
## entzheim_3 0.8164966 1.0000000 1.0000000 0.0000000
Please note that the function ade4::dist.binary() produces the square root of the Jaccard distance.
In comparison with the Jaccard index, the Ochiai index weights the number of shared presences (quantity a) by the amount of overlap. Overlap here means that the sizes of the two attribute sets are also taken into account: the weighting is implemented by scaling with the geometric mean of the quantities a+b and a+c.
Thus,
\[ D = 1 - \frac{a} {\sqrt{(a+b)*(a+c)}} \]
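Again a quick sketch for the same example objects:

a <- 1; b <- 2; c <- 1

D_O <- 1 - a / sqrt((a + b) * (a + c))   # 1 - 1/sqrt(6), approx. 0.59
sqrt(D_O)                                # square root reported by ade4::dist.binary(), approx. 0.77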
To apply the Ochiai distance, we pass 7 to the method argument of the ade4 package:
dist.binary(newdf, method = 7)
## achenheim_1.1 achenheim_1.2 didenheim_2 entzheim_3
## achenheim_1.1 0.0000000 1.0000000 1.0000000 0.7071068
## achenheim_1.2 1.0000000 0.0000000 0.8434008 1.0000000
## didenheim_2 1.0000000 0.8434008 0.0000000 1.0000000
## entzheim_3 0.7071068 1.0000000 1.0000000 0.0000000
Please note that the function again produces the square root of the original distance.
Categorical data means that not just “0”s and “1”s are possible; the values describe categories which are not numerical and cannot be ranked. Examples would be colours described by words (“red”, “blue”, “orange”) or, very archaeologically, types of pottery.
Because the Michelsberg data is binary, we have to use another data collection as an example. As categorical archaeological data we will use the dataset “DartPoints” from the archdata package (Carlson et al. 2018). The dataset consists of metrical and categorical measurements on 91 Archaic dart points recovered during surface surveys at Fort Hood, Texas (Carlson et al. 1987). They represent five types. As we want to show the analysis of categorical data, we will shorten this dataset so that it contains only the categorical variables.
data("DartPoints")
DP <- DartPoints[, c(1, 12:17)]
The Hamming distance is more or less a “simple matching algorithm” for categorical data.
The Hamming similarity for a column k is 1 (\(S_k = 1\)) if the values in that column are the same for both objects; otherwise it is 0. The sum of these values is divided by the number of columns considered, so that the sum is normalised:
\(S_{total} = \frac{\sum S_k} {n\ of\ columns}\)
The Hamming distance is then, as usual, 1 minus the similarity value \(S_{total}\): \(D = 1 - S_{total}\).
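A minimal sketch with two invented objects described by four categorical variables:

obj1 <- c(blade = "triangular", base = "straight", colour = "grey", retouch = "yes")
obj2 <- c(blade = "triangular", base = "convex",   colour = "grey", retouch = "no")

S_k     <- obj1 == obj2             # TRUE (counted as 1) where the categories agree
S_total <- sum(S_k) / length(S_k)   # 2 agreements / 4 columns = 0.5
1 - S_total                         # Hamming distance D = 0.5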
There are several packages in R which calculate this distance measure. We suggest using the package FD and its function gowdis(). It implements the Gower distance measure, which is applicable to several data types at the same time; for categorical / qualitative descriptors it uses the Hamming distance described above.
In it, a missing value automatically sets the weight \(w_j\) of the affected column to 0 (Legendre et al. 2012, 280), which means that this column does not contribute to the similarity of the two objects.
You apply the Hamming distance using this code:
library(FD)
DP_h <- gowdis(DP)
To look at the distance matrix we convert it into a matrix and call upon the first 4 rows and columns.
DP_ham <- as.matrix(DP_h)
DP_ham[1:4, 1:4]
## 1 2 3 4
## 1 0 0 0 0
## 2 0 0 0 0
## 3 0 0 0 0
## 4 0 0 0 0
Well done! You have now learned about different distance matrices. It is important to think about what kind of data you have (binary or categorical) and how you want to weight the co-occurrence of absences of a type. Is a shared absence meaningful in your case? Or might it rather reflect an incomplete dataset?
Think about these topics before using distance matrices and cluster analysis methods!
The archdata package (Carlson et al. 2018) provides the examples for conducting the analysis. The “Michelsberg” dataset is used both for the spatial analysis and for testing binary distance measures; the other distance matrices were tested using the “DartPoints” dataset. We will need a Correspondence Analysis (CA) plot, using vegan (Oksanen et al. 2019). hDBscan is performed through the package dbscan (Hahsler et al. 2019). The distance matrix calculations for simple matching, the Jaccard coefficient and Ochiai are provided by ade4 (Dray 2020). The Gower dissimilarity matrix is calculated using the FD package (Laliberté 2015). The packages FactoMineR, factoextra and expss (as.dichotomy()) are also used.
Many archaeologists are interested in using cluster analysis to analyze their datasets. However, many of the clustering algorithms look for only certain “shapes” of clusters, such as circles, and encounter trouble when there are points that fall between clusters (noise). This tutorial shows how to use hDBscan (hierarchical density based clustering of applications with noise), an algorithm which attempts to deal with these concerns and, furthermore, tries to provide a means for determining how many clusters are appropriate for a data set.
The following part of the tutorial is broken into two sections:

1. hDBscan with spatial data
2. hDBscan based on different distance matrices (binary and categorical data)

We use spatial data as the first implementation of the clustering method, because we can imagine the process more easily.
2D maps are a constant in archaeological research; archaeologists have been working with distribution maps, for example, for a long time. Spatial analysis is therefore one of the essential fields for testing and implementing statistical methods, especially for the identification of patterns and relationships between objects in space. Here, cluster analysis is applied to classify data points based on their position in space.
The method used in the spatial analysis is the Hierarchical Density-Based Spatial Clustering of Applications with Noise (hdbscan) provided by the package dbscan
(Hahsler et al. 2019).
We are using the Michelsberg
dataset from the package archdata
(Carlson et al. 2018). This dataset includes counts of 39 vessel types from 109 archaeological features belonging to 69 sites of the Central European Younger Neolithic Michelsberg Culture (MBK; 4350-3500 BC) and one site of the Funnel Beaker Culture (TBK; 4300-2800 BC). For this spatial clustering we only use the XY coordinates in UTM WGS 84 Zone 32N (Höhn 2002) and delete the duplicates.
library(archdata)
data("Michelsberg")
Once you have loaded the dataset, create a new one using only coordinate variables.
xy <- as.matrix(Michelsberg[, 41:42])
The unique() function removes duplicated elements/rows from a vector, data frame or array, so we use it on these data points as well.
xy <- unique(xy)
We will plot these in a very simple way to show how they scatter in space:
plot(xy)
hDBscan is a clustering algorithm based on single linkage. Single linkage has the problem that two discrete clusters might be connected by a noise point and recorded as one cluster. To combat this, hDBscan transforms the space in such a way that sparsely distributed points are pushed even further away from the others, so that they don't “accidentally” connect to the denser areas that we want to be recognised as discrete clusters. Inside the denser areas, though, the original distances are kept. For this, a core distance is calculated: the k-nearest-neighbour distance. For each point the distance to its k nearest points is measured and the longest of these distances is recorded. It is the radius needed to reach all k points: \(core_k(x)\).
You can see that for points in less dense areas the core distance is larger than for those in denser areas. We now define a new value: the “mutual reachability distance” between two points a and b is the largest of three values, namely the Euclidean distance between the two points, the core distance of point a, and the core distance of point b:
\(d_{mreach-k}(a, b) = max(core_k(a), core_k(b), d(a,b))\)
You can see that the Euclidean distance between the blue and the green point may be larger than the core distance of the blue point, but smaller than the core distance of the green point. Therefore the core distance of the green point is now the mutual reachability distance between green and blue.
This is done for every pair of points, so we obtain a distance matrix with new values for mutual reachability.
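As a sketch of these two steps (the choice of k = 4 is only for illustration; hdbscan derives it from minPts internally), the core distances and the mutual reachability matrix of the xy coordinates could be computed like this with dbscan::kNNdist():

library(dbscan)

k <- 4
core <- kNNdist(xy, k = k)          # core distance: radius needed to reach the k nearest neighbours
d    <- as.matrix(dist(xy))         # ordinary Euclidean distances

# mutual reachability distance: the largest of core(a), core(b) and d(a, b)
mreach <- pmax(d, outer(core, core, pmax))
diag(mreach) <- 0
mreach[1:4, 1:4]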
We do not need to give k (the number of points to be reached by the core distance) ourselves; hDBscan takes all possible core distances and creates a hierarchical tree out of them.
To get there we now need to create a spanning tree, to be precise, a minimal spanning tree.
Conceptually we create a graph from the distance matrix in which all points are connected to each other, with edge weights corresponding to the mutual reachability distance between the two points. But this would mean creating \(n^2\) edges, which is a lot and would be very computationally intensive. Therefore a minimal spanning tree is constructed, which removes all superfluous edges. The algorithm of Jarník, Prim and Dijkstra (developed in graph theory) uses a threshold which is continually lowered while going through the edges; with each step, the edges whose weights are larger than the threshold are removed from the graph. This way a network of points is created whose connections to each other have differing strength.
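As an illustration (using plain Euclidean distances here instead of the mutual reachability distances that hDBscan uses internally), a minimum spanning tree of the coordinates can be computed and drawn with vegan::spantree():

library(vegan)

mst <- spantree(dist(xy))   # minimum spanning tree from a dist object
plot(xy, pch = 20)
lines(mst, xy)              # draw the tree edges between the coordinates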
To get a cluster hierarchy out of this spanning tree, the edges are ordered by distance and the points are merged into clusters with the help of a union-find structure. A dendrogram is created by this:
library(dbscan)
MB <- hdbscan(xy, minPts = 5)
plot(MB$hc, main = "HDBSCAN* Hierarchy")
This is the kind of cluster hierarchy that is created by hierarchical cluster algorithms. But we want to find out which clusters are the “best” ones.
We now look at this hierarchical tree differently: it is one big cluster that loses points. Most often it loses only one point at a time, sometimes it loses several at once. But are these a cluster by themselves, or is it still just one cluster that got a bit smaller?
How many points need to split off at the same time to count as a cluster of their own is decided by the user. This is the “minPts” argument given to the hdbscan function. The algorithm now runs through the hierarchy tree and decides at each split whether the number of points splitting off is larger or smaller than the minPts argument. If it is smaller, the cluster retains its identity and just loses points. If enough points leave the cluster at once, they form a new cluster by themselves; this way two child clusters are created. From this we can build a reduced cluster tree:
plot(MB, gradient = c("purple", "blue", "green", "yellow"), scale = 1.5)
In this reduced cluster tree the width of a line shows how many points are in the cluster (the cluster size). So it is easy to see where the points “fall out of the cluster”: the line width is reduced there. This is easier to read than the hierarchical tree before, but we still don't know which clusters to choose.
Intuitively we want to choose those clusters that “live longest”; short-lived clusters may simply be artefacts of the single-linkage approach. Thinking about the last figure, we want the clusters with the “largest area of ink”, AND we do not want to choose a child of an already chosen parent. To give this a mathematical spin we calculate the following values:
\(\lambda = \frac{1}{\mathrm{distance}}\)
\(\lambda_{\mathrm{birth}}\) and \(\lambda_{\mathrm{death}}\) are start and end points of each cluster
\(\lambda_p\) is the lambda at which each point leaves its cluster. It lies somewhere between the \(\lambda_{\mathrm{birth}}\) and the \(\lambda_{\mathrm{death}}\) of the cluster.
Now we can define the stability of each cluster as the sum of all differences between the lambda-value of each point leaving the cluster and the lambda-value of the birth of the cluster:
\(\sum_{p \in \mathrm{cluster}} (\lambda_p - \lambda_{\mathrm{birth}})\)
To choose the most stable clusters, we start at the leaves of the cluster tree and work bottom-up: if the sum of the stability values of the child clusters is larger than the stability of the parent cluster, we keep the child clusters as chosen clusters and pass their summed value on to the parent cluster, to be compared at the next level. If the stability of the parent cluster is higher than the sum of its children's stability, we discard the children and choose the parent cluster. When we reach the top of the hierarchy tree, our chosen clusters should be the most stable ones in the system.
We can show them here:
plot(MB, gradient = c("purple", "blue", "green", "yellow"), show_flat = T)
Surprising, isn't it? But clusters 2, 3, 4 and 5 together are more stable than “the big green one”, therefore they are chosen.
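The stability scores that were computed for the selected (“flat”) clusters are stored in the hdbscan result and can be inspected directly:

MB$cluster_scores   # stability score of each selected cluster
table(MB$cluster)   # cluster sizes; cluster 0 is noise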
We can now use each point's lambda value (\(\lambda_p\)). We can normalise these values within each cluster so that they range from 0 to 1 and use them as a measure of the strength of cluster membership: the higher the \(\lambda_p\), the later the point “leaves” its cluster. hDBscan saves these values in the “membership_prob” attribute, and we can plot our data with this information (as the alpha value) together with the cluster membership (colour):
# colour per cluster, transparency (alpha) per membership probability
colors <- mapply(function(col, i) adjustcolor(col, alpha.f = MB$membership_prob[i]),
                 palette()[MB$cluster + 1], seq_along(MB$cluster))
plot(xy)
points(xy, col = colors, pch = 20)
As you can see, the points closer to each other are coloured “more strongly”; these are the ones whose probability of belonging to their cluster is strongest. A few points are not coloured in; they are considered noise. As you can see, they are more scattered and only “pair up” with one other point. As we decided to have at least 5 points in one cluster (see above!), this is not surprising.
Try to change this minPts
parameter and see what happens to the plot!
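As a sketch, you could loop over a few candidate values (chosen arbitrarily here) and compare the number of clusters and noise points:

for (mp in c(3, 5, 10, 20)) {
  res <- hdbscan(xy, minPts = mp)
  cat("minPts =", mp, ":", max(res$cluster), "clusters,",
      sum(res$cluster == 0), "noise points\n")
}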
In the previous part of this tutorial, we have seen how to apply hDBscan clustering techniques on bivariate data, in this case spatial coordinates (X, Y). However, archaeologists usually want to look for clusters in data including more than just 2 variables. Looking at more than two variables at the same time is a multivariate approach. We now explore how using different distance matrices for binary and categorical data influences hDBscan clustering.
For our binary data example, we will be using the dataset “Michelsberg” from the package archdata
(Carlson et al. 2018).
library(archdata)
data("Michelsberg")
Once you’ve loaded the dataset, select only the columns of vessel types and reclassify counts into presence/absence:
newdf <- Michelsberg[, 5:39]
newdf[newdf > 0] <- 1
In order to examine the results of the hDBscan, we need a plot on which to color the clusters. For this multivariate dataset, a Correspondence Analysis makes the most sense. Let’s create that plot now using the cca
function (canonical correspondence analysis) of the package vegan
:
library(vegan)
newdfcca <- cca(newdf)
newdfcca
## Call: cca(X = newdf)
##
## Inertia Rank
## Total 4.12
## Unconstrained 4.12 34
## Inertia is scaled Chi-square
##
## Eigenvalues for unconstrained axes:
## CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8
## 0.7767 0.4009 0.2646 0.2235 0.1971 0.1887 0.1688 0.1645
## (Showing 8 of 34 unconstrained eigenvalues)
plot(newdfcca, main = "Canonical CA of Michelsberg Ceramic Type data")
Now that we have a plot on which we can display the results of the clustering analysis, we can extract the xy coordinates of the CCA for easy plotting later on
ccaxy <- scores(newdfcca, display = "si")
and move on to applying the different distance measures.
The first distance matrix we will use is the simple matching distance. The prerequisite for using simple matching distances is a symmetrical binary dataset. Since such datasets are uncommon in archaeological research, the tutorial will focus on the code used to run the analysis without discussing the details. Should you know or use symmetrical binary data, we would love to hear back from you so we could modify the analysis. The only difference compared to the spatial analysis presented in the first part of this tutorial is that we submit the distance matrix directly to hdbscan rather than the raw data, instead of letting hdbscan compute its own distance matrix. Calculating simple matching distances is discussed in another part of the tutorial and thus will not be described in detail again here.
With this code we can create the simple matching distances:
library(ade4)
smnewdf <- dist.binary(newdf, method = 2)
We can now submit smnewdf
to the hdbscan
algorithm.
library(dbscan)
smhdb <- hdbscan(smnewdf, minPts = 3)
smhdb
## HDBSCAN clustering for 109 objects.
## Parameters: minPts = 3
## The clustering contains 2 cluster(s) and 83 noise points.
##
## 0 1 2
## 83 4 22
##
## Available fields: cluster, minPts, cluster_scores, membership_prob,
## outlier_scores, hc
As you can see, using this distance matrix, only 2 clusters are uncovered and the vast majority of the data (83/109 points) fall into the category “noise”.
First, let us look at the reduced hierarchy:
plot(smhdb, show_flat = TRUE)
Here is the projection of the clustering on the CCA:
plot(ccaxy, col = smhdb$cluster + 1, pch = 21)
This clustering does not seem to make much sense, which is not surprising, since the simple matching distance is not appropriate for this (asymmetric) dataset.
The second of the binary distance measures we will use is the Jaccard coefficient. As the calculation of this matrix is discussed above, it will not be described in detail again here.
With this code, we can create the Jaccard matrix:
jnewdf <- dist.binary(newdf, method = 1)
This matrix, jnewdf
, can then be fed into the hdbscan
algorithm:
jhdb <- hdbscan(jnewdf, minPts = 3)
jhdb
## HDBSCAN clustering for 109 objects.
## Parameters: minPts = 3
## The clustering contains 3 cluster(s) and 43 noise points.
##
## 0 1 2 3
## 43 25 7 34
##
## Available fields: cluster, minPts, cluster_scores, membership_prob,
## outlier_scores, hc
As you can see, using this distance matrix we get three clusters and only 43 noise points, already a result much more encouraging than for the simple matching distance.
These clusters can be seen in the reduced hierarchy plot:
plot(jhdb, show_flat = TRUE)
and projected onto our CCA:
plot(ccaxy, col = jhdb$cluster + 1, pch = 21, main = "CCA Jaccard Clusters")
This clustering seems to make much more sense - both the cluster in the upper right hand corner as well as the more linear one in the left hand side of the graph have been marked. Some of the overlapping points which are not marked as part of clusters are likely the result of the same issue described above for the simple matching distance: our CCA plot is only displaying variability along 2 axes, much variability could lie in the other dimensions of our dataset.
Another distance matrix may be calculated using the Ochiai index, implemented in the package ade4
. Calculating Ochiai indices is discussed in the tutorial above, and thus will not be described in detail again here.
With this code we can create the Ochiai distance:
library(dbscan)
library(ade4)
ochnewdf <- dist.binary(newdf, method = 7)
We now can use the distance matrix ochnewdf
and put it to the hdbscan
algorithm. We choose the minimal Cluster size (minPts) to be 3:
ochhdb <- hdbscan(ochnewdf, minPts = 3)
ochhdb
## HDBSCAN clustering for 109 objects.
## Parameters: minPts = 3
## The clustering contains 4 cluster(s) and 34 noise points.
##
## 0 1 2 3 4
## 34 34 9 3 29
##
## Available fields: cluster, minPts, cluster_scores, membership_prob,
## outlier_scores, hc
The reduced hierarchy and the clusters hDBscan extracts from it:
plot(ochhdb, show_flat = TRUE)
This plot shows the projection of the clustering on the CCA:
ccaxy <- scores(newdfcca, display = "si")
plot(ccaxy, col = ochhdb$cluster + 1, pch = 21, main = "CCA Ochiai Clusters")
There are two distinct clusters derived by the hDBscan algorithm, which are denoted in the reduced hierarchical cluster tree as clusters no. 1 and 4. This distinction is clear in the CCA as well (cyan and red). The two smaller and more closely connected clusters 2 and 3 of the hDBscan result cannot be differentiated in the plot of the CCA (green and blue); the differences are ‘hidden’ in the other dimensions.
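As a quick check, one could project the same clustering onto two of the other CA axes (axes 3 and 4 are chosen arbitrarily here):

ccaxy34 <- scores(newdfcca, display = "si", choices = 3:4)
plot(ccaxy34, col = ochhdb$cluster + 1, pch = 21, main = "CCA axes 3 and 4, Ochiai clusters")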
Both the clusterings resulting from hdbscan applied to the Jaccard and to the Ochiai distance measures seem to make sense. A worthwhile question is: how do the two actually differ?
Beyond the identification of 3 clusters from the Jaccard matrix and 4 clusters from the Ochiai matrix, we can mark the points where the two solutions disagree, for example where a point is assigned to a cluster by one measure but treated as noise by the other.
Here it is clearly visible that the clustering in the left-hand portion of the CCA is more or less the same regardless of which distance measure is utilized (see also the section on the simple matching distance). More differences are apparent in what appears to be the visually “denser” right-hand portion of the CCA. This supports the suggestion made previously that a significant amount of variability is “hidden” in this latter portion of the plot.
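One possible sketch of such a comparison: because the cluster labels of the two solutions are arbitrary, we only cross-tabulate them and mark the points whose noise status differs between the two distance measures:

# cross-tabulate the two solutions (cluster 0 = noise)
table(Jaccard = jhdb$cluster, Ochiai = ochhdb$cluster)

# mark points that are noise in one solution but clustered in the other
differs <- (jhdb$cluster == 0) != (ochhdb$cluster == 0)
plot(ccaxy, col = jhdb$cluster + 1, pch = ifelse(differs, 17, 21),
     main = "Jaccard clusters; triangles differ in noise status from Ochiai clusters")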
As explained above, categorical or qualitative datasets consist of descriptive and non-numerical categories.
For categorical data we use the Hamming distance, which is a simple matching rule: the similarity is 1 (\(S_k = 1\)) when two entries agree and 0 (\(S_k = 0\)) when they disagree. If both entries are zero, this is also treated as an agreement. The similarity is converted to the Hamming distance according to \(D = 1 - S\), so in effect the number of differing entries is counted and normalised by dividing by the number of variables compared. A more detailed explanation can be found above.
As categorical archaeological data we will use the dataset “DartPoints” from the archdata package (Carlson et al. 2018). The dataset consists of metrical and categorical measurements on 91 Archaic dart points recovered during surface surveys at Fort Hood, Texas (Carlson et al. 1987). They represent five types. As we want to show the analysis of categorical data, we will shorten this dataset to consist only of the categorical variables.
data("DartPoints")
DP <- DartPoints[, c(1, 12:17)]
In order to apply the Hamming distance to our “DP” dataframe, we use the “Functional Diversity” R package FD (and advise against using the package e1071), in which the calculation of the Gower distance has been implemented by Etienne Laliberté, Pierre Legendre and Bill Shipley (2015). The function is named gowdis(). In it, a missing value automatically sets the weight \(w_j\) to 0 (Legendre et al. 2012, 280).
You apply the Hamming distance using this code:
library(FD)
DP_h <- gowdis(DP)
Then, the distance matrix called DP_h
is used in the hdbscan
-function, from the dbscan
package (Hahsler et al. 2019). We set the minimal cluster size in the argument called minPts. You may try several values; here, 6 is an adequate minimal cluster size.
library(dbscan)
DP_hdb <- hdbscan(DP_h, minPts = 6)
If you look at the DP_hdb object, you will find that two clusters have been identified. More information is available by taking a look at the reduced hierarchy dendrogram.
plot(DP_hdb, show_flat = TRUE)
If we want to plot these groups, we need to visualise them in 2D space. We therefore take the measurements of the dart points' width and length. These measurements have not been used for the cluster algorithm, so we can check whether the groups we found with hDBscan have any relation to the size of the points.
plot(DartPoints$Length, DartPoints$Width, col = DP_hdb$cluster + 1, pch = 21)
As you can see, the two groups seem to be only slightly related to the size of the dart points: there are smaller dart points and larger ones, but only rarely do very large dart points belong to the green group. The red group does not spread as far as the green one; all of its members are medium-sized. It is important to note, though, that size is not a helpful criterion for differentiating these two groups, because they overlap completely.
Good job on making it to the end of this tutorial!
In this part you learned about hDBscan, a multivariate clustering algorithm, and how using different distance measures may influence your output.
We would be happy to hear from you if you have any suggestions or remarks on this tutorial! Cheers!
Carlson, D.L. 2017. Quantitative Methods in Archaeology Using R. Cambridge University Press.
Carlson, D.L., & Roth, G. 2018. archdata: Example Datasets from Archaeological Research. Available at: https://CRAN.R-project.org/package=archdata [Accessed March 10, 2020].
Carlson, H.B.E., S.B., & Young, D.E. 1987. Archaeological Survey at Fort Hood, Texas, Fiscal Year 1984. United States Army Fort Hood.
Dray, S. 2020. Package ‘ade4’. Available at: https://cran.r-project.org/web/packages/ade4.
Hahsler, M., Piekenbrock, M., & Doran, D. 2019. dbscan: Fast density-based clustering with R. Journal of Statistical Software 91(1): 1–30.
Höhn, B. 2002. Die Michelsberger Kultur in der Wetterau. Habelt.
Laliberté, E. 2015. Package ‘FD’. Available at: https://cran.r-project.org/web/packages/FD.
Legendre, P., & Legendre, L. 2012. Numerical Ecology. Amsterdam: Elsevier.
Lüning, J. 1967. Die Michelsberger Kultur: Ihre Funde in zeitlicher und räumlicher Gliederung. Berichte der Römisch-Germanischen Kommission 48.
Oksanen, J. et al. 2019. vegan: Community Ecology Package. Available at: https://CRAN.R-project.org/package=vegan.