Because an attribute agreement analysis can be time-consuming, expensive, and generally inconvenient for everyone involved (the analysis itself is simple compared with planning and executing it), it is best to take a moment to really understand what needs to be done and why.

This example uses a repeatability score to illustrate the idea, but the same logic applies to reproducibility. The point is that an attribute agreement analysis needs many samples to detect differences, and doubling the number of samples from 50 to 100 does not make the test much more sensitive. The difference that needs to be detected naturally depends on the situation and on the level of risk the analyst is willing to accept in the decision, but the reality is that with 50 scenarios, an analyst will be hard-pressed to conclude that there is a statistically significant difference in the repeatability of two evaluators with match rates of 96% and 86%. Even with 100 scenarios, a gap between 96% and 88% is at best only marginally detectable.

The audit should help determine which specific people and codes are the main sources of problems, and the attribute agreement assessment should help determine the relative contribution of repeatability and reproducibility issues for those specific codes (and individuals). In addition, many bug tracking databases have accuracy problems in the records that indicate where an error originated, because what is stored is the location where the error was found, not the location where it was created. Where the error is found is of little help in identifying causes, so the accuracy of the location assignment should also be an element of the audit.

Despite these difficulties, performing an attribute agreement analysis on bug tracking databases is not a waste of time. In fact, it is (or can be) an extremely informative, valuable, and necessary exercise.
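The sensitivity claim above can be sanity-checked with a standard two-proportion z-test. This is a minimal sketch, not part of the original analysis: the scenario counts and match rates come from the text, while the pooled-variance test and the function name are my own illustration.

```python
from math import sqrt, erf

def two_proportion_p_value(k1, n1, k2, n2):
    """Two-sided two-proportion z-test using a pooled standard error.

    k1/n1 and k2/n2 are the match rates of two evaluators
    (matches out of scenarios scored).
    """
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)                 # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# 50 scenarios: 96% (48/50) vs 86% (43/50) agreement.
# A ten-point gap is not significant at the usual 0.05 level.
print(two_proportion_p_value(48, 50, 43, 50))   # roughly 0.08

# 100 scenarios: 96% (96/100) vs 88% (88/100) agreement.
# Even after doubling the sample, an eight-point gap is only
# marginally detectable.
print(two_proportion_p_value(96, 100, 88, 100))
```

The pooled z-test is an approximation (an exact test such as Fisher's would be slightly more conservative), but it is enough to show why 50 scenarios give the analyst so little resolving power.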
Attribute agreement analysis should be applied to a bug tracking system only with caution and with a specific objective in mind. After all, a bug tracking system is not a continuous manufacturing operation.