Described as “one of the simplest, most powerful and revolutionary tools of research”, the randomised controlled trial (RCT) has yielded a great deal of important information in the health sciences. It is usually held up as the “gold standard” for gathering medical evidence.
The RCT can tell us which procedure or treatment is more effective under tightly controlled situations. This evidence is useful and important, but we also need to know things like what people want from health services, which treatments are preferred, and why some people stick to treatment regimes and some people don’t.
These issues are particularly relevant to remote Australia and Aboriginal and Torres Strait Islander health, where high levels of illness and early death persist, and where what applies to the tightly controlled conditions of a laboratory rarely translates.
The government is rolling out its A$40 million plan to evaluate Indigenous health programs. The Evidence and Evaluation Framework aims to strengthen reporting, monitoring and evaluation for programs and services provided to Indigenous Australians.
As Indigenous Affairs Minister Nigel Scullion said last year:
When you don’t know anything about any of the programs, then you’re just relying on gut feelings, and that’s not good enough.
So, the framework will provide information about where government money is being spent, what works and why. However, from a Western biomedical perspective, the randomised controlled trial is afforded an elevated position in establishing what works and why. While some recommend using RCTs to evaluate Indigenous programs, it is critical to keep in mind why this form of evidence-gathering is not always appropriate in this context.
Randomised controlled trials aren’t real life
In health and medical research, the RCT involves randomly assigning people to different groups and giving the groups different treatments. Random allocation ensures there are no systematic differences between participants at the start of the study.
At the end of the study, any differences between the groups can be attributed to the treatment and not some other factor. RCTs, therefore, are an elegant and efficient way of ruling out competing explanations for an observed effect.
However, research participants and scenarios in randomised controlled trials are often unlike the patients and settings to which the evidence will ultimately be applied. For example, RCTs have demonstrated that psychological treatments delivered through the internet can be effective for a wide range of disorders. But in real-world settings, adherence rates to internet treatments are very low, so the RCT result has little practical meaning.
The issue of which particular outcome should take priority can also be difficult to resolve through the RCT approach to research. Most RCTs prioritise the clinical perspective, such as a measurable change in a particular health outcome. However, there can be a mismatch between what doctors view as success and what patients and their loved ones perceive as a positive outcome following drug or other forms of treatment.
For example, it is known anecdotally in Alice Springs that some Aboriginal Australians who could benefit from kidney dialysis treatment prefer, instead, to go back to their community to be on country. While this can be detrimental to their physical health, it has important cultural significance for them.
The RCT approach in this situation would undoubtedly demonstrate the health benefits of kidney dialysis. But understanding this problem in the context of real lives requires different methodologies. Unless we design research programs to consider why people would rather stay on country than receive effective health treatments, Aboriginal health may not improve.
How best to gather evidence
Valuable work can be conducted by health professionals and service providers collecting data during their regular daily activities. The model of the “scientist-practitioner” often observed in clinical psychology could be applied to great effect in remote Australia.
This model promotes a seamless transition between science and practice in which the individual is both researcher and clinician. Scientist-practitioners adopt a critical stance to their clinical practice and routinely demonstrate, through evaluation, the value of the service they are providing.
Such a model was used in a GP practice in rural Scotland. There, one simple change to how appointments were scheduled almost doubled the number of patients able, over a six-month period, to access a psychology service within a reasonable time after referral from their GP.
Rather than clinicians advising patients when to attend the next appointment, systems were organised so patients booked appointments in the same way they would to see a GP. The changes were quantified by clinician-researchers who collected these data in the course of their routine clinical practice.
After this change, patients were able to access the service within two weeks of being referred, rather than waiting for seven months as had been the case. Access to services is typically problematic in rural areas, so discovering a cost-effective means of improving access is an important outcome.
The results were so substantial and sudden that they were unequivocal. A large expensive RCT wasn’t necessary to demonstrate this simple change had made important improvements.
This sort of approach could easily be applied in remote Australian settings. An RCT is not the only way, nor even the best way in all situations, to eliminate alternative reasons for the treatment outcomes obtained. Many important questions are ignored or refashioned inappropriately when only one methodology predominates.
Especially in the area of Indigenous health, the health and medical community must be guided by what patients want, not just by what health professionals know how to do.
Author: Tim Carey, Professor, Director of the Centre for Remote Health, Flinders University