Talk:Fleiss' kappa

Comments

The first section states

"It is a measure of the degree of agreement that can be expected above chance. Agreement can be thought of as follows, if a fixed number of people assign numerical ratings to a number of items then the kappa will give a measure for how consistent the ratings are."

This seems a bit awkward to me. Would it be equivalent to say something like the following?

"It measures to what extent the raters are more in agreement than would be expected if they assigned ratings randomly."

Later, you say "A K value of 1 means complete agreement". Complete agreement between the raters or complete agreement with what would be achieved by chance? I'm pretty certain it's the former, but it's not entirely clear.

I leave this here as I'm not certain this is what you want to say. (I don't know enough statistics to be much more than a copyeditor on this.) Simen Rustad 15:12, 21 November 2006 (CST)
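
For reference, a sketch using the standard definition of Fleiss' kappa (writing \bar{P} for the mean observed agreement across items and \bar{P}_e for the expected agreement by chance) supports the former reading:

    \kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}

If every rater assigns the same category to every item, the observed agreement is \bar{P} = 1, so \kappa = (1 - \bar{P}_e)/(1 - \bar{P}_e) = 1 whatever the chance term is (provided \bar{P}_e < 1). So a kappa of 1 means complete agreement between the raters, not agreement with what chance would produce.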