Talk:Fleiss' kappa: Difference between revisions
==Comments==
Latest revision as of 11:41, 26 September 2007
The first section states:
"It is a measure of the degree of agreement that can be expected above chance. Agreement can be thought of as follows, if a fixed number of people assign numerical ratings to a number of items then the kappa will give a measure for how consistent the ratings are."
This seems a bit awkward to me. Would it be equivalent to say something like the following?
"It measures to what extent the raters are more in agreement than would be expected if they assigned ratings randomly."
Later, you say "A K value of 1 means complete agreement". Complete agreement between the raters or complete agreement with what would be achieved by chance? I'm pretty certain it's the former, but it's not entirely clear.
I leave this here as I'm not certain this is what you want to say. (I don't know enough statistics to be much more than a copyeditor on this.) Simen Rustad 15:12, 21 November 2006 (CST)
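For what it's worth, the question above can be checked directly against the standard Fleiss' kappa computation. Below is a minimal sketch (the function name and the example rating table are mine, not from the article): when every rater puts each item in the same category, the observed agreement is 1, so kappa comes out exactly 1 regardless of the chance-agreement term — i.e. "complete agreement" does mean agreement between the raters.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table where ratings[i][j] is the number of
    raters who assigned item i to category j (same rater count per item)."""
    N = len(ratings)           # number of items
    n = sum(ratings[0])        # raters per item
    k = len(ratings[0])        # number of categories
    # p[j]: proportion of all assignments falling in category j
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # P[i]: fraction of agreeing rater pairs for item i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N                 # mean observed agreement
    P_e = sum(pj * pj for pj in p)     # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Complete agreement among the raters: all 4 raters pick the same
# category for every item, so kappa is 1 even though chance agreement
# (P_e = 0.375 here) is well below 1.
perfect = [[4, 0, 0], [0, 4, 0], [0, 0, 4], [4, 0, 0]]
print(fleiss_kappa(perfect))   # 1.0
```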