Attribute Agreement Analysis

Between Appraisers - Kendall's Coefficient of Concordance


If you have two or more appraisers, you can assess the agreement of their ratings with one another. With ordinal data of 3 or more levels, you can calculate Kendall's coefficient of concordance.

Kendall's coefficient of concordance expresses the degree of association among the appraisers' ratings. Because it uses information about the relative ordering of the ratings, it is sensitive to the seriousness of a misclassification. For example, suppose pie crust crispiness is rated on a 1-5 scale. The consequences of misclassifying a perfectly crispy pie crust (rating = 5) as soggy (rating = 1) are more serious than the consequences of misclassifying it as mostly crisp (rating = 4).

Kendall's coefficient of concordance can range from 0 to 1. The higher the coefficient, the stronger the agreement among the appraisers' ratings.
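
The sketch below shows one way to compute Kendall's coefficient of concordance from a matrix of ordinal ratings. It is not Minitab's implementation; the function name kendalls_w and the layout of the ratings array (one row per part, one column per appraiser or appraiser-trial combination) are assumptions made for illustration. It uses the standard rank-based formula with a tie correction, together with the large-sample chi-square approximation that gives the p-value discussed below.

import numpy as np
from scipy.stats import chi2, rankdata

def kendalls_w(ratings):
    # ratings: one row per part, one column per appraiser
    # (or appraiser-trial combination); values are ordinal codes.
    ratings = np.asarray(ratings, dtype=float)
    n, m = ratings.shape  # n parts, m rating columns

    # Rank each column; tied ratings receive average ranks.
    ranks = np.apply_along_axis(rankdata, 0, ratings)

    # Sum of ranks for each part and its squared deviation from the mean.
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()

    # Tie correction: sum of (t^3 - t) over groups of tied ratings in each column.
    ties = 0.0
    for column in ratings.T:
        _, counts = np.unique(column, return_counts=True)
        ties += (counts ** 3 - counts).sum()

    # Tie-corrected Kendall's W and its large-sample chi-square approximation.
    w = 12.0 * s / (m ** 2 * (n ** 3 - n) - m * ties)
    chi_sq = m * (n - 1) * w
    df = n - 1
    p_value = chi2.sf(chi_sq, df)
    return w, chi_sq, df, p_value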

Use the p-values to choose between two opposing hypotheses, based on your sample data:

·    H0: There is no association between the appraisers' ratings

·    H1: There is an association between the appraisers' ratings

The p-value is the probability of obtaining a sample with a Kendall's coefficient of concordance at least as large as yours if the null hypothesis (H0) is true. If the p-value is less than or equal to a predetermined level of significance (α-level), you reject the null hypothesis and claim support for the alternative hypothesis.
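
Continuing the sketch above with invented ratings (three hypothetical appraisers, each rating six pie crusts on the 1-5 crispiness scale), the decision rule looks like this:

# Hypothetical data: rows are pie crusts, columns are appraisers.
ratings = np.array([
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 1],
    [4, 4, 5],
    [2, 2, 2],
    [5, 4, 4],
])

w, chi_sq, df, p_value = kendalls_w(ratings)
alpha = 0.05  # predetermined level of significance
print(f"Coef = {w:.6f}, Chi-Sq = {chi_sq:.3f}, DF = {df}, P = {p_value:.4f}")
if p_value <= alpha:
    print("Reject H0: the appraisers' ratings are associated.")
else:
    print("Fail to reject H0: no evidence that the ratings are associated.")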

Example Output

Kendall's Coefficient of Concordance


    Coef   Chi-Sq  DF       P
0.990056  229.693  29  0.0000

Interpretation

For the fabric data, with α = 0.05, the p-value for agreement between all appraisers is 0.0000, so you can reject the null hypothesis. The appraisers' ratings of print quality are associated with one another.
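
As a rough consistency check of the example output above, the large-sample approximation implies Chi-Sq = m × DF × Coef, where m is the number of rating columns (appraisers × trials). Back-calculating m from the reported values suggests eight rating columns; this is only a sanity check, not part of the standard output:

# Back-calculate the implied number of rating columns from the output,
# assuming the large-sample relation Chi-Sq = m * DF * Coef.
coef, chi_sq, df = 0.990056, 229.693, 29
m = chi_sq / (df * coef)
print(round(m, 1))  # -> 8.0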