Penn Vet's Veterinary Clinical Investigations Center offers veterinarians, investigators, and sponsors access to a suite of research tools validated by investigators at Penn Vet through a single portal called PennCHART (Companion-animal Health Assessment Research Tools).
The Purpose of PennCHART
- The purpose of PennCHART is to make readily available, well-validated tools for the assessment of pain, physical functioning, quality of life, and other symptoms experienced by companion animals (pets). Real advances in the diagnosis and management of the conditions that afflict our companion animal species can only be made through well-designed studies that use valid and reliable outcome assessment tools.
- Whether the tools are used for animal health studies, translational research, or veterinary patient management, the user must know that they reliably measure what they are purported to measure and that they can detect clinically important changes in the health status of the animal. The tools available on this site have sufficient scientific support for their use in the recommended populations.
Tool Validation: General Description
Prior to using a tool, one must know that it reliably measures what it is purported to measure and that it is able to detect clinically significant changes in the population in which it is being used. Without knowing this up front, it is impossible to know whether the results of a study obtained with an ‘unvalidated’ tool are due to the intervention being studied or to inconsistencies in the tool.
Validity
Validation is the process of determining what, if anything, is being measured by the tool. That is, can a valid statement about an animal be made based on its score on the tool? The validation process is therefore directed both toward the integrity of the tool itself and toward the inferences that can be made about the characteristics of the animals scored by the tool. Several types of validity assessment can add to the confidence that can be placed in the conclusions drawn from the scores on a tool:
- Face and Content Validity: These are judgments that the tool appears reasonable. Face validity indicates whether, on the face of it, the tool appears to be assessing the desired qualities. Content validity is a judgment of whether the tool covers all of the relevant content.
- Concurrent Validity: The correlation of a tool with some other measure of the condition under study, ideally a ‘gold standard’ that is accepted in the field.
- Construct Validity: Testing used when the tool measures something (a construct) that cannot be directly observed (e.g., anxiety, pain, quality of life). While the construct itself cannot be seen, behaviors resulting from it can be observed. No single experiment can unequivocally ‘prove’ the validity of a tool used to measure a construct; instead, several complementary approaches exist (a brief sketch follows this list):
- Factor Analysis: Factor analysis is used to uncover the latent structure of a set of questions in a questionnaire. It can reveal which questions are best associated with different aspects of the construct and which are not.
- Extreme Groups: The tool is applied to two groups, one known to have the construct and one known not to have it. The scores of the two groups should be significantly different.
- Convergent Validity: Assessing how closely the tool correlates with other measures of the same construct.
- Discriminant Validity: Confirming that the tool does not correlate with measures of an unrelated construct.
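As a rough illustration of how two of these approaches can be checked in practice, the following Python sketch applies an extreme-groups comparison and convergent/discriminant correlations to simulated scores. The data, group sizes, and comparison measures are invented for the example and do not come from any PennCHART tool.

```python
# Minimal sketch, using simulated data, of an extreme-groups comparison and of
# convergent/discriminant correlations; nothing here is PennCHART code or data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Extreme groups: animals judged clinically painful vs. pain-free should score differently.
painful = rng.normal(loc=28, scale=6, size=40)     # simulated tool scores
pain_free = rng.normal(loc=12, scale=6, size=40)
t, p = stats.ttest_ind(painful, pain_free)
print(f"Extreme groups: t = {t:.2f}, p = {p:.4f}")

# Convergent validity: correlation with another measure of the same construct (should be high).
# Discriminant validity: correlation with a measure of an unrelated construct (should be near zero).
scores = np.concatenate([painful, pain_free])
other_pain_measure = 0.8 * scores + rng.normal(scale=4, size=scores.size)
unrelated_measure = rng.normal(size=scores.size)
r_conv, _ = stats.pearsonr(scores, other_pain_measure)
r_disc, _ = stats.pearsonr(scores, unrelated_measure)
print(f"Convergent r = {r_conv:.2f}, discriminant r = {r_disc:.2f}")
```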
Reliability
- The tool must measure what it is supposed to measure in a consistent and reproducible manner.
- The stability of a tool refers to the reproducibility of its scores when it is administered on different occasions.
- When the tool is a questionnaire, its internal consistency is assessed from a single administration and represents the average of the correlations among the questions in the tool.
- Both internal consistency and stability must be demonstrated before a questionnaire can be deemed reliable (a brief sketch of both checks follows this list).
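The sketch below works through both checks on simulated questionnaire data (rows are animals, columns are questions). Cronbach's alpha is used here as one common summary of internal consistency, and a correlation of repeated total scores as a measure of stability; both choices are conventions assumed for the example rather than PennCHART requirements.

```python
# Minimal sketch with simulated questionnaire data; Cronbach's alpha and the
# test-retest correlation below are illustrative conventions, not PennCHART code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_score = rng.normal(size=(60, 1))                       # 60 animals
items = true_score + rng.normal(scale=0.5, size=(60, 8))    # 8 related questions

# Internal consistency (single administration): Cronbach's alpha.
k = items.shape[1]
sum_item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")

# Stability (test-retest): re-administer the tool to the same, unchanged animals
# and correlate the total scores from the two occasions.
retest = items + rng.normal(scale=0.3, size=items.shape)
r, _ = stats.pearsonr(items.sum(axis=1), retest.sum(axis=1))
print(f"Test-retest r = {r:.2f}")
```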
Responsiveness
- The responsiveness of a tool is the extent to which it accurately reflects the change in an animal's condition and discriminates between animals that change over time and those that do not. It is a critical property of the tool because without adequate responsiveness it could fail to detect a clinically significant change in an animal’s condition.
- In addition, a tool with inadequate responsiveness would not be useful for assessing the effectiveness of a treatment because it could fail to identify meaningful differences in change between treatment and control groups.
- Tests of validity and reliability do not ensure the responsiveness of an instrument. A separate assessment must be made.
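One common way to quantify responsiveness, used here purely as an illustration rather than as a PennCHART-prescribed method, is the standardized response mean (SRM): the mean change in score divided by the standard deviation of that change, computed for animals known to have changed and for animals known to be stable. The data below are simulated.

```python
# Illustrative sketch of the standardized response mean (SRM) on simulated scores;
# the statistic and the data are assumptions for the example, not a PennCHART method.
import numpy as np

rng = np.random.default_rng(2)
baseline = rng.normal(loc=25, scale=5, size=50)
after_treatment = baseline - rng.normal(loc=8, scale=4, size=50)  # animals that improved
after_stable = baseline + rng.normal(loc=0, scale=4, size=50)     # animals that did not change

def srm(before, after):
    """Mean change divided by the standard deviation of the change."""
    change = after - before
    return change.mean() / change.std(ddof=1)

# A responsive tool shows a large SRM in the group that changed and an SRM near
# zero in the group that did not.
print(f"SRM, treated group: {srm(baseline, after_treatment):.2f}")
print(f"SRM, stable group:  {srm(baseline, after_stable):.2f}")
```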
About PennCHART Tools
Tools available on this site have been validated by investigators at Penn Vet and have sufficient science to support their use in the recommended populations.