Classification of Distal Radius Fractures in Children
Methods
We designed this study to comply with the guidelines for reliability studies of fracture classification systems outlined by Audigé, Bhandari and Kellam. We included the first 105 consecutive distal radius fractures in children below the age of 16 years treated at our institution in 2007. Where indicated, information on follow-up was retrieved from the electronic medical records. The radiographs were identified through our computerized files and checked by the authors. Fractures not involving the metaphysis according to the AO Pediatric Comprehensive Classification of long bone fractures were considered diaphyseal and were excluded from the study. No radiograph was excluded because of poor quality, in order to avoid selection bias. Standard anteroposterior and lateral radiographs of the distal radius were reviewed independently by 12 observers: four junior orthopedic residents with a mean experience in fracture management of 14 months (range 6–28), four senior orthopedic registrars with a mean experience of 41 months (range 30–49), and four experienced orthopedic surgeons (consultants). In our institution, pediatric distal radius fractures are generally managed by the junior residents under the supervision of the senior registrars; the orthopedic consultants are only occasionally involved in the management of these fractures.
Each fracture was classified into one of four categories: buckle (torus), greenstick, complete, or physeal fracture. Physeal fractures were not subclassified further. Before rating the radiographs, the observers were given schematic drawings of the different fracture types and a written description of the differences between the categories. No further instructions intended to improve agreement were given. The radiographs were reviewed by each observer on two occasions, 3 months apart. The raters were blinded to clinical information about the patients, they received no feedback between the two sessions, and the order of the fractures was randomly changed before the second rating.
Statistics
The statistical analyses were performed using the free software R, version 2.9.2, with the associated package irr. Kappa statistics were used to analyze interobserver and intraobserver agreement. The kappa value is a coefficient of agreement between observers that corrects for the proportion of agreement that could have occurred by chance. Fleiss introduced both a category-specific kappa and a kappa that can be employed when there are more than two observers, as is the case in our study. For intraobserver agreement we used Cohen's kappa. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement no better than that expected by chance. Several authors have provided guidelines for the interpretation of kappa scores (Table 2).
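The kappa coefficient is defined as kappa = (Po − Pe) / (1 − Pe), where Po is the observed proportion of agreement and Pe the proportion of agreement expected by chance alone. As a minimal sketch of how such an analysis can be run in R with the irr package (the file names and data layout below are illustrative assumptions, not taken from the study):

  library(irr)

  # Assumed input: one table per rating session, with one row per
  # fracture (n = 105) and one column per observer (m = 12); the
  # file names are hypothetical.
  session1 <- read.csv("ratings_session1.csv")
  session2 <- read.csv("ratings_session2.csv")

  # Interobserver agreement: Fleiss' kappa across all 12 observers;
  # detail = TRUE also returns the category-specific kappas for the
  # buckle, greenstick, complete and physeal categories.
  kappam.fleiss(session1, detail = TRUE)

  # Intraobserver agreement: Cohen's kappa between one observer's
  # first and second ratings (here, observer in column 1).
  kappa2(cbind(session1[, 1], session2[, 1]))

kappam.fleiss accepts any number of raters, whereas kappa2 compares exactly two sets of ratings, which makes it suited to the paired first and second readings of a single observer.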