The correct answer is inter-rater reliability.
Inter-rater reliability is the degree of agreement among raters. It is a score of how much homogeneity, or consensus, exists in the ratings given by various judges. In contrast, intra-rater reliability is a score of the consistency in ratings given by the same person across multiple instances. Inter-rater and intra-rater reliability are aspects of test validity. Assessing them is useful in refining the tools given to human raters, for example, in determining whether a particular scale is appropriate for measuring a particular variable. If various raters do not agree, either the scale is defective or the raters need to be retrained.
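To make "degree of agreement" concrete, one widely used statistic is Cohen's kappa, which corrects the raw percentage of agreement for the agreement two raters would reach by chance. Below is a minimal Python sketch, assuming two raters assigning categorical labels to the same items; the function name and the judge_1/judge_2 ratings are hypothetical, illustrative values, not data from the question.

```python
# Minimal sketch of Cohen's kappa for two raters and categorical labels.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b), "raters must score the same items"
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two judges score six items on a pass/fail scale.
judge_1 = ["pass", "pass", "fail", "pass", "fail", "fail"]
judge_2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(f"Cohen's kappa: {cohens_kappa(judge_1, judge_2):.2f}")  # 0.33
```

A kappa near 1 indicates strong agreement, while a value near 0 means agreement is no better than chance; a low kappa is exactly the signal, described above, that the scale may be defective or the raters may need retraining.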