Talk:Fleiss' kappa
Comments
The first section states
"It is a measure of the degree of agreement that can be expected above chance. Agreement can be thought of as follows, if a fixed number of people assign numerical ratings to a number of items then the kappa will give a measure for how consistent the ratings are."
This seems a bit awkward to me. Would it be equivalent to say something like the following?
"It measures to what extent the raters are more in agreement than would be expected if they assigned ratings randomly."
I leave this here as I'm not certain this is what you want to say. (I don't know enough statistics to be much more than a copyeditor on this.) Simen Rustad 15:12, 21 November 2006 (CST)
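For what it's worth, the "agreement above chance" idea can be made concrete with a small computation. The sketch below follows the standard Fleiss' kappa formula (observed pairwise agreement minus chance agreement, normalized); the table of hypothetical ratings is made up for illustration, not taken from the article.

```python
# Minimal sketch of Fleiss' kappa, illustrating "agreement above chance".
# Rows = items, columns = categories; each cell counts the raters who
# assigned that item to that category (hypothetical data).

def fleiss_kappa(table):
    N = len(table)             # number of items
    n = sum(table[0])          # raters per item (assumed fixed)
    k = len(table[0])          # number of categories
    # Per-item agreement: fraction of rater pairs that agree on the item.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P_i) / N
    # Expected chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement: all three raters pick the same category per item,
# so kappa is 1 regardless of what agreement chance alone would produce.
perfect = [[3, 0], [0, 3], [3, 0]]
print(fleiss_kappa(perfect))
```

With perfect agreement the observed agreement equals 1 and kappa comes out as 1; if raters assigned categories at random, observed agreement would match the chance term and kappa would be near 0, which is exactly the "more in agreement than would be expected if they assigned ratings randomly" reading proposed above.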