Researchers often name their trials using an acronym. These include some from the past which might raise eyebrows now, such as ISIS (ISIS-2, 1998) and the many attempts to incorporate “cov” into the names of trials relating to COVID-19, including the RECOVERY trial (Glasziou and Tikkenen 2021).

However, one acronym with which all trialists, reviewers and users of research should be more familiar is DICE – “Don’t Ignore Chance Effects”. Over 20 years beginning in the early 1990s, a series of DICE studies has highlighted how chance could affect even a perfectly designed randomised trial, with 100% adherence to the allocated interventions and no loss to follow-up, or a mathematically perfect meta-analysis.

This should not be surprising given the fundamental principle of randomised trials: in a typical 2-group individually randomised trial, participants are allocated to an intervention and a control group by chance (James Lind Library 2.2). Thus, even if the intervention has absolutely no additional effect compared with a control, then, purely by chance, the groups could have different average outcomes. Whether this might have happened is tested using the statistical significance of the between-group difference. However, setting the threshold at the traditional p=0.05 will lead to “statistically significant” differences with almost the same frequency as people rolling 11 with a pair of dice. The problem becomes even worse if multiple analyses are done and the one with the most striking difference, or lowest p-value, is elevated to become a key result of the trial (James Lind Library 2.5).
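As a quick check on that analogy: exactly two of the 36 equally likely outcomes of a pair of fair dice, (5,6) and (6,5), sum to 11, giving a probability of 2/36 ≈ 0.056, essentially the conventional 0.05 threshold. A minimal sketch in Python (illustrative only, not part of the original DICE materials):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
elevens = [o for o in outcomes if sum(o) == 11]   # (5, 6) and (6, 5)
p_eleven = Fraction(len(elevens), len(outcomes))
print(p_eleven, float(p_eleven))                  # 1/18 ≈ 0.0556, versus p = 0.05
```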
The DICE studies attempted to illustrate this for trials and reviews, and they serve as a cautionary tale for everyone involved in the conduct or use of controlled trials and meta-analyses.

In the early 1990s, as part of an exercise to teach doctors about clinical trials and systematic reviews, stroke doctors were asked to generate a series of simulated randomised trials of a therapy called “Dice” which, when their results were combined in meta-analyses, might have sufficient statistical power to detect a moderate treatment effect (Counsell et al, 1994). The study, which became known as DICE 1, focused particularly on whether the combination of chance, biased decisions about including studies in a meta-analysis, inappropriate subgroup analysis and publication bias could lead to a conclusion that Dice therapy was beneficial and could save the lives of patients in specific circumstances, even though it should have no impact whatsoever.

Each participant on the course was given a red, green or white die and asked to write their name and the colour of the die on a data form. They then rolled their dice a specified number of times to represent the number of patients in the treatment group of a randomised trial, with each 6 recorded as a death on the form and all other numbers recorded as a survival. This was then repeated the same number of times for the control group. The trials varied in size from five in each group (total of 10) to a total of 200, with 100 rolls of the dice for the treatment group and 100 for the control group. For each participant, this first trial was followed by a second of a different size. None of the dice was biased and there was no reason other than chance for the “trials” to produce different results. However, when the trials were analysed to test three hypotheses (see below), the danger that chance and bias can lead to misleading conclusions became clear.
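The data-generating process described above is easy to reproduce in simulation. The sketch below is a hypothetical reconstruction, not the original DICE 1 exercise or its analysis: it rolls a fair die for equal-sized “treatment” and “control” groups, scores each 6 as a death, and counts how often a standard two-sided test declares a “significant” difference even though the therapy can have no effect. The per-group sizes and the choice of Fisher’s exact test are assumptions for illustration.

```python
# Hypothetical reconstruction of a DICE-style null trial, for illustration only.
import random
from scipy.stats import fisher_exact  # assumes SciPy is available

def roll_group(n_patients, rng):
    """Number of 'deaths' (sixes) in n_patients rolls of a fair die."""
    return sum(1 for _ in range(n_patients) if rng.randint(1, 6) == 6)

def simulate_trial(n_per_group, rng):
    """One simulated trial: identical 'treatment' and 'control' dice groups."""
    deaths_t = roll_group(n_per_group, rng)
    deaths_c = roll_group(n_per_group, rng)
    table = [[deaths_t, n_per_group - deaths_t],
             [deaths_c, n_per_group - deaths_c]]
    _, p_value = fisher_exact(table)  # two-sided by default
    return p_value

rng = random.Random(1994)            # arbitrary seed, for reproducibility
for n in [5, 25, 50, 100]:           # per-group sizes spanning 5 to 100, as above
    n_trials = 2000
    hits = sum(simulate_trial(n, rng) < 0.05 for _ in range(n_trials))
    print(f"n={n:>3} per group: {hits / n_trials:.1%} of null trials reach p < 0.05")
```

With a fair die, roughly 5% of such do-nothing trials will cross p<0.05 in larger samples (Fisher’s exact test is conservative, so the rate can sit below 5% in the smallest trials). That is precisely the trap DICE 1 was designed to expose.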