Thinking it through


Before jumping into G_String, it is worthwhile to organize one's thoughts:

  1. What, or who, is the object of measurement?

It is usually a person, thing, or organization whose quality is being assessed. In generalizability jargon, this is referred to as the Facet of Differentiation.

  2. What attribute or quality of the facet of differentiation is to be evaluated?

This can be concrete, such as height, weight, or age, or more abstract, such as intelligence, skill, or knowledge.

  3. What is the yardstick for this assessment, namely the type of measured data to feed to G_String? Is it binary (yes/no, etc.), ordinal, integer, or decimal?

If the data are ordinal, integer, or decimal, they must all measure one identical dimension and range (e.g. 'height', 'competence', etc.), and some kind of median value must be meaningful.
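To make this concrete, here is a minimal Python sketch, independent of G_String itself; the rating scale and score values are hypothetical:

```python
# A minimal sketch of the "one dimension, one range, meaningful median"
# requirement. The 1..5 'competence' scale and the scores are hypothetical.
from statistics import median

SCALE_MIN, SCALE_MAX = 1, 5
scores = [3, 4, 2, 5, 4, 3, 3]

# All values must fall within the one shared range ...
assert all(SCALE_MIN <= s <= SCALE_MAX for s in scores)

# ... and a median must be meaningful, which holds for ordinal, integer,
# and decimal data, but not for unordered categories.
print(median(scores))  # prints 3
```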

  4. This yardstick is applied to the quality of performance on specific, exemplary tasks carried out by the facet of differentiation, i.e. the object of interest.

In generalizability jargon, this task is part of the Facets of Generalization.

  5. Besides the task itself, other factors can affect the assessed performance quality as well, such as the conditions under which the assessment took place, specific aspects of the task challenge, or the person who rated the performance.

Such factors are treated as additional Facets of Generalization.
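One way to picture this: each recorded score is indexed by the object of measurement plus every facet of generalization. The following Python sketch uses hypothetical facet names and values, not G_String's actual input format:

```python
# A minimal sketch of one observation per facet combination;
# the facet names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    candidate: str  # facet of differentiation (object of measurement)
    task: str       # facet of generalization
    rater: str      # additional facet of generalization
    occasion: str   # additional facet of generalization (conditions)
    score: int      # the measured datum

obs = Observation(candidate="c01", task="history taking",
                  rater="r03", occasion="morning", score=4)
```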

  6. Often the facet of differentiation itself exhibits relevant, classifiable properties, such as age, gender, or school.

In generalizability jargon, such properties are referred to as Facets of Stratification. A special case arises when facets of generalization are nested within facets of stratification: in this case, the final scores are no longer absolute, but relative within each stratum of the facets of stratification!
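The following minimal Python sketch, with hypothetical data, illustrates why such scores become relative: each candidate is compared only against their own stratum.

```python
# Raters are nested in school (the stratum), so scores are only
# comparable within the same school. The data below are hypothetical.
from statistics import mean
from collections import defaultdict

# (school, candidate, score) triples
records = [("A", "c1", 4), ("A", "c2", 2), ("B", "c3", 5), ("B", "c4", 3)]

by_school = defaultdict(list)
for school, _, score in records:
    by_school[school].append(score)

# Express each score relative to its own stratum's mean.
for school, cand, score in records:
    print(cand, score - mean(by_school[school]))
# c1 and c3 both stand 1 point above their own school's mean,
# even though the raw scores in school B run higher.
```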

  7. The term 'nested' implies that the meaning of a subclass differs from that of the corresponding subclass in another encompassing class; for example, when the performance raters are drawn from the same school as the candidates, or when each task challenge comes with its own set of questions.
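As a minimal illustration with hypothetical identifiers, compare how crossed and nested rater facets look in the raw data:

```python
# In a crossed design every candidate meets the very same raters;
# in a nested design each school supplies its own raters, so 'the
# first rater' in school A is a different person from 'the first
# rater' in school B. All identifiers are hypothetical.
crossed = {
    ("c1", "r1"): 4, ("c1", "r2"): 3,
    ("c2", "r1"): 5, ("c2", "r2"): 4,  # raters r1 and r2 rate everyone
}

nested = {
    "school A": {("c1", "rA1"): 4, ("c1", "rA2"): 3},  # A's own raters
    "school B": {("c2", "rB1"): 5, ("c2", "rB2"): 4},  # B's own raters
}
```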