Percent Agreement in R

Percent agreement is a common metric used in research to quantify the level of agreement between two or more raters or coders. It is calculated by dividing the number of cases on which the raters agree by the total number of cases and multiplying by 100.

In R, the simplest way to calculate percent agreement is to compare the two rating vectors element-wise with `==` and count the matches. The `table()` function is also handy here: cross-tabulating the two sets of codes puts the agreements on the diagonal and the disagreements off it, as shown in the sketch after the main example below.

For example, let’s say we have two coders rating the same set of data. We can use the following code to calculate percent agreement:

```r
# Create the data
coder1 <- c(1, 2, 3, 1, 1, 2, 3, 2, 3, 1)
coder2 <- c(1, 2, 3, 1, 2, 2, 3, 1, 3, 1)

# Calculate percent agreement
agree <- sum(coder1 == coder2)
total <- length(coder1)
percent_agree <- (agree / total) * 100
percent_agree
```

The output of this code will be 80, indicating that the two coders agreed on 80% of the cases (they match on 8 of the 10 ratings).
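As mentioned above, `table()` offers a useful cross-check. The following is a minimal sketch that assumes both coders used the same set of categories, so the cross-tabulation is square and its diagonal holds the agreements:

```r
# Cross-tabulate the two coders' ratings; agreements fall on the diagonal
confusion <- table(coder1, coder2)
confusion

# Sum the diagonal to recover the same percent agreement
# (assumes a square table, i.e. both coders used every category)
sum(diag(confusion)) / sum(confusion) * 100
```

The `irr` package, used below for Cohen's kappa, also provides an `agree()` function that computes this percentage directly.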

We can also use the `kappa2()` function from the `irr` package to calculate Cohen’s kappa, which is another common measure of agreement between two coders. Kappa takes into account the possibility of agreement occurring by chance.

```r
# Calculate Cohen's kappa
library(irr)
kappa2(cbind(coder1, coder2))
```

The output of this code will be a kappa of approximately 0.70, which by the common Landis and Koch benchmarks indicates substantial agreement between the two coders.
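To make the chance correction concrete, kappa can also be computed by hand from its standard formula, (po - pe) / (1 - pe), where po is the observed agreement and pe is the agreement expected by chance. This is an illustrative sketch, not a replacement for `kappa2()`:

```r
# Observed agreement: proportion of matching ratings
po <- mean(coder1 == coder2)   # 0.8

# Chance agreement: sum over categories of the product of
# each coder's marginal proportions (categories 1-3 here)
p1 <- table(factor(coder1, levels = 1:3)) / length(coder1)
p2 <- table(factor(coder2, levels = 1:3)) / length(coder2)
pe <- sum(p1 * p2)             # 0.34

# Cohen's kappa: observed agreement corrected for chance
(po - pe) / (1 - pe)           # ~0.697, matching kappa2()
```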

In conclusion, percent agreement is a useful metric for assessing the level of agreement between two or more raters or coders. In R, it takes only a simple element-wise comparison of the rating vectors to compute, with `table()` providing a convenient cross-tabulation of agreements and disagreements, and the `kappa2()` function from the `irr` package yields Cohen's kappa, which additionally accounts for agreement occurring by chance.
