Title
Can credibility criteria be assessed reliably? A meta-analysis of criteria-based content analysis.
Author(s)
Keywords
Meta-analysis
Criteria-based content analysis
Interrater reliability
Credibility assessment
Detection of deception
Publication date
2017
Abstract
This meta-analysis synthesizes research on the interrater reliability of Criteria-Based Content Analysis
(CBCA). CBCA is an important component of Statement Validity Assessment (SVA), a forensic
procedure used in many countries to evaluate whether statements (e.g., of sexual abuse) are based on
experienced or fabricated events. CBCA comprises 19 verbal content criteria, which are frequently adapted
for research on detecting deception. A total of k = 82 hypothesis tests revealed acceptable interrater
reliabilities for most CBCA criteria, as measured with various indices (except Cohen’s kappa). However,
results were largely heterogeneous, necessitating moderator analyses. Blocking analyses and meta-regression
analyses on Pearson’s r identified significant moderators: research paradigm, intensity of
rater training, type of rating scale used, and, for some CBCA criteria, the frequency of occurrence (base
rates). The use of CBCA summary scores is discouraged. Implications for research vs. field settings, for
future research, and for forensic practice in the United States and Europe are discussed.
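
The abstract's two statistical observations, that reliability looks acceptable on most indices "except Cohen's kappa" and that criterion base rates moderate the results, reflect a known property of kappa: when a criterion is coded present very rarely (or very often), chance agreement is high and kappa collapses even though raw agreement is unchanged. A minimal sketch in pure Python with hypothetical codings (the function and data are illustrative, not taken from the paper):

    # Two raters code a binary CBCA criterion (1 = present, 0 = absent)
    # in 20 statements. Both scenarios have identical raw agreement (18/20).

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters' binary codings of the same items."""
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
        p1 = sum(r1) / n                                # rater 1 base rate
        p2 = sum(r2) / n                                # rater 2 base rate
        p_e = p1 * p2 + (1 - p1) * (1 - p2)             # chance agreement
        return (p_o - p_e) / (1 - p_e)

    # Balanced base rate (criterion present in half the statements).
    balanced_1 = [1] * 10 + [0] * 10
    balanced_2 = [0] + [1] * 9 + [1] + [0] * 9          # two disagreements

    # Extreme base rate (criterion almost never coded present).
    skewed_1 = [1] + [0] * 19
    skewed_2 = [0, 0, 1] + [0] * 17                     # two disagreements

    print(cohens_kappa(balanced_1, balanced_2))         # ~0.80: substantial
    print(cohens_kappa(skewed_1, skewed_2))             # ~-0.05: near chance

With 90% agreement in both cases, kappa drops from roughly 0.80 to slightly below zero as the base rate moves from 0.50 to 0.05, which is why percent agreement and kappa can point in opposite directions for rarely occurring CBCA criteria.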
URI
ISSN
1040-3590
DOI
10.1037/pas0000426
Collections
- PSIJU. Artículos