Enthoven P1, Tseli E2, Äng B3, Boersma K4, Stålnacke B-M5, Gerdle B6, Grooten W3
1Linköping University, Department of Medical and Health Sciences, Linköping, Sweden, 2Karolinska Institute, Department of Neurobiology, Care Sciences and Society, Stockholm, Sweden, 3School of Education, Health and Social Studies, Dalarna University, Falun, Sweden, 4Örebro University, Örebro, Sweden, 5Umeå University, Department of Community Medicine and Rehabilitation, Rehabilitation Medicine, Umeå, Sweden, 6Linkoping University, Pain and Rehabilitation Centre, and Department of Medical and Health Sciences, Linköping, Sweden
Background: Lately, many studies have been performed to identify prognostic factors important for outcome prediction after rehabilitation of patients with chronic pain, and there is a need to synthesize these through systematic reviews. In this process, it is central to assess study quality and the risk of bias (RoB), and a new tool, QUIPS (Quality In Prognosis Studies), has been developed for this purpose. QUIPS consists of a number of prompting items categorized into six domains, and each domain is judged on a three-grade scale (low, moderate, or high RoB). Presently, only limited data on interrater agreement are available.
Purpose: The aim of the present study was to determine the interrater agreement of the RoB assessment in prognostic studies of patients with chronic pain using QUIPS, and to elaborate on QUIPS.
Methods: We performed a meta-analysis of prognostic factors that could influence multiple long-term outcomes after multidisciplinary rehabilitation in patients with chronic pain. The RoB was assessed in 43 published papers by two researchers (raters) in two rounds (15 and 28 papers, respectively). The interrater agreement and Cohen's weighted kappa coefficient (k) and 95% confidence interval (CI) were calculated across all papers (258 domains), and for Round 1 (90 domains) and Round 2 (168 domains), separately. Disagreement between the raters was discussed and resolved by consensus in a final discussion.
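The weighted kappa statistic used in the Methods can be sketched as follows. This is an illustrative implementation only: the ratings below are hypothetical (not study data), and the linear disagreement-weighting scheme is an assumption; it corresponds to the three-level ordinal RoB scale (low/moderate/high) used in QUIPS.

```python
import numpy as np

def weighted_kappa(r1, r2, categories=("low", "moderate", "high")):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Observed joint-rating matrix, as proportions
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[idx[a], idx[b]] += 1
    obs /= obs.sum()
    # Expected matrix under independence, from the marginal proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear disagreement weights: 0 on the diagonal, growing with distance
    w = np.abs(np.arange(k)[:, None] - np.arange(k)[None, :]) / (k - 1)
    # kappa = 1 - (weighted observed disagreement) / (weighted expected disagreement)
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical domain ratings for two raters (for illustration only)
rater1 = ["low", "low", "moderate", "high", "moderate", "low", "high", "moderate"]
rater2 = ["low", "moderate", "moderate", "high", "low", "low", "high", "high"]
print(round(weighted_kappa(rater1, rater2), 2))  # → 0.59
```

With linear weights, adjacent-category disagreements (e.g. low vs. moderate) are penalized less than low vs. high, which matches the ordinal nature of the RoB scale.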
Results: The raters agreed in 157 out of 258 domains (61%), with lower interrater agreement in Round 1 (59%: 53/90) than in Round 2 (62%: 104/168). The overall weighted kappa coefficient (kappa for all domains and all papers) was weak: k = 0.47 (95% CI = 0.35 - 0.60). A “minimal agreement” between the raters was found in Round 1, k = 0.32 (0.13 - 0.52), which increased to a “weak agreement” in Round 2, k = 0.54 (0.39 - 0.68).
Maximal disagreement was not caused by one rater consistently judging RoB lower or higher than the other; hence, there was no systematic difference in rating style.
Conclusion(s): Despite relatively low interrater agreement, QUIPS proved to be a useful tool for assessing the risk of bias when performing a meta-analysis of prognostic studies in pain rehabilitation, since it “forces” the raters to look at important aspects of study quality. Disagreement between the raters was easily resolved by consensus in a final discussion, which underlines the importance of this discussion and of study quality assessment by at least two raters. Some items were particularly hard to differentiate, and a learning phase was required to increase the interrater agreement. This paper puts forward suggestions for improving the tool and for avoiding pitfalls during the process.
Implications: Firstly, the RoB assessment of every paper included in a review should be performed by at least two raters. Secondly, raters should hold an initial and continuing discussion on how to interpret the instructions, reach agreement on how to define the qualifiers used in the instructions and cut-off points when applicable, and agree on how to define the difference between moderate and high RoB.
Keywords: Chronic pain, Inter-rater Agreement, Meta-analysis
Funding acknowledgements: AFA Insurance (project number: 140340)
Topic: Pain & pain management; Disability & rehabilitation
Ethics approval required: No
Institution: Karolinska Institute Stockholm
Ethics committee: Ethics Review Board Stockholm, Sweden
Reason not required: This study was part of a systematic review and meta-analysis.
All authors, affiliations and abstracts have been published as submitted.