Inter-Annotator Agreement (IAA)

Pair-wise Cohen's kappa and group Fleiss' kappa (κ) coefficients for categorical annotations

Louis de Bruijn
Towards Data Science
4 min read · Jul 18, 2020


In this story, we'll explore Inter-Annotator Agreement (IAA), a measure of how consistently multiple annotators make the same annotation decision for a given category. Supervised Natural Language Processing algorithms learn from a labeled dataset that is often annotated by humans. An example is the annotation scheme for my master's thesis, where tweets were labeled as either abusive or non-abusive.
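
As a minimal sketch of what computing both coefficients can look like in Python, assuming scikit-learn (for pair-wise Cohen's kappa) and statsmodels (for group Fleiss' kappa); the three annotators and their labels below are hypothetical toy data, not the thesis dataset:

```python
# Sketch: pair-wise Cohen's kappa and group Fleiss' kappa for
# categorical labels. Assumes scikit-learn and statsmodels are installed.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Three hypothetical annotators labeling the same five tweets.
annotator_a = ["abusive", "non-abusive", "abusive", "non-abusive", "abusive"]
annotator_b = ["abusive", "non-abusive", "non-abusive", "non-abusive", "abusive"]
annotator_c = ["abusive", "abusive", "abusive", "non-abusive", "abusive"]

# Pair-wise agreement between two annotators: Cohen's kappa.
print("Cohen's kappa (A vs B):", cohen_kappa_score(annotator_a, annotator_b))

# Group agreement over all three annotators: Fleiss' kappa.
# aggregate_raters takes a (subjects x raters) matrix and returns a
# (subjects x categories) table counting each label per tweet.
ratings = np.array([annotator_a, annotator_b, annotator_c]).T
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa (A, B, C):", fleiss_kappa(table))
```

Both coefficients correct raw agreement for the agreement expected by chance: a kappa of 1 indicates perfect agreement, while a kappa of 0 indicates agreement no better than chance.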
