A Case of Good Design

The most important skill in rehabilitation is evaluating the outcome of interventions with each individual. The most important knowledge in rehabilitation is how to systematically conduct such evaluations using a single case experimental design.

This may not be self-evident. No doubt there would be dissenters to these statements. And clearly, there are many other skills and much other knowledge that we would expect in any competent rehabilitation practitioner. For instance, the ability to skillfully assess capacity and impairment, and to contextualize these in relation to community integration, is important. An awareness of evidence-based approaches to rehabilitation, and the ability to bring knowledge from one's particular professional background to bear on implementing these effectively, is likewise important. Yet, as a field we are now well aware of the limitations of randomized trials: in particular, that effects observed on a group basis in carefully selected sub-populations can only get us so far in knowing what will work for a specific person at a specific place and time. So while we as clinicians consider the available evidence, draw on knowledge and experience, and provide the best interventions we can, we need something more.

As an undergraduate psychology student I completed a course on psychophysics. In one laboratory session, in an old house on the outskirts of campus, the professor demonstrated the next task. He put on a pair of heavy, sound-isolating headphones and spoke into a microphone; with his own voice returned through the headphones in real time, he spoke normally. A switch was then thrown, introducing a delay of a couple of seconds into the audio feedback he was receiving. Coherence dissolved. His speech became slurred, halting, irregular, at times barely interpretable. Removing the headphones, he explained that fluent human speech is predicated on the continuous feedback inherent in listening to our own voice. When this feedback is disrupted, we are thrown off track. He noted that some people are more affected by this than others, and that his wiring left him particularly susceptible. We were invited to experience derailment.

What matters most in rehabilitation is long-term outcomes, yet in many cases clinicians receive only scant information about whether their interventions continued to work even in the short term after discharge, let alone over the years that follow. In essence, in clinical practice our feedback loop is often disrupted. Goal setting and outcome evaluation are now appropriately commonplace in rehabilitation services. However, pre- and post-treatment assessments are in themselves insufficient to systematically demonstrate that an intervention was the cause of any changes observed in the person receiving rehabilitation. Factors such as spontaneous recovery are directly correlated with time, and therefore with time spent in our rehabilitation services, and therefore with apparent progress through treatment. To close the feedback loop, we need a way to rigorously demonstrate whether an intervention worked for this specific person. Fortunately, that methodology already exists: the single case experimental design.
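To make the underlying logic concrete, here is a minimal sketch of how repeated measures from a simple AB single case design might be summarised. Everything in it is my own illustration rather than anything drawn from the work discussed here: the weekly scores are invented, and the Nonoverlap of All Pairs (NAP) index is just one of several effect size options for single case data. A bare AB comparison still lacks full experimental control; designs such as ABAB or multiple baseline, together with visual inspection for trend, are what let us attribute change to the intervention rather than to time.

```python
# Hypothetical weekly outcome scores (higher = better) for one person,
# split into a baseline (A) phase and an intervention (B) phase.
baseline = [12, 14, 13, 12, 15]
intervention = [18, 21, 20, 23, 22]

def nap(phase_a, phase_b):
    """Nonoverlap of All Pairs: the proportion of all (A, B) score pairs
    in which the intervention-phase score exceeds the baseline-phase
    score, with ties counted as 0.5. Values near 1.0 indicate the
    intervention phase consistently outperformed baseline."""
    pairs = [(a, b) for a in phase_a for b in phase_b]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

print(f"Baseline mean:     {sum(baseline) / len(baseline):.1f}")          # 13.2
print(f"Intervention mean: {sum(intervention) / len(intervention):.1f}")  # 20.8
print(f"NAP:               {nap(baseline, intervention):.2f}")            # 1.00
```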

Work in this area is being pursued most vigorously by Prof. Robyn Tate at the University of Sydney and various colleagues in Australia. They have been iterating a measure to distinguish high-quality, well-controlled single case research designs from qualitative case studies and the steps in between. It was initially published in 2008 as the Single Case Experimental Design (SCED) scale (available for download from psychbite.com); a revised version, renamed the Risk of Bias in N-of-1 Trials (RoBiN-T) scale, is in development and adds further sophistication. An evolution of the PEDro-P scale for evaluating randomized and non-randomized controlled trials, these scales were designed to evaluate the quality of N-of-1 trials. Beyond appraisal, they also provide an overview of the considerations in designing a good single case study from the outset. As a result, this work is compulsory reading for all my postgraduate students.

Implementing single case experimental designs in clinical practice will not necessarily be straightforward. The Neuropsychological Rehabilitation Special Interest Group of the World Federation for NeuroRehabilitation (WFNR) held its annual conference in Bergen, Norway in July 2012. Among much good work presented there, Dr Henk Eilander of the Department of Brain Injury, Huize Padua, Netherlands, presented a poster, Feasibility of single-case study designs to evaluate neuropsychiatric treatment of behaviour disorders after severe acquired brain injury. (Dr Eilander was good enough to allow the poster to be hosted on this site so you can access its full content.) A key conclusion: "Although case studies are feasible in a clinical setting with limited resources, the naturalistic character of this study as well as the inexperience with systematic research resulted in too much variability to be able to draw firm conclusions on the effects of the medication." My reflection from their valuable study: we clearly need to be devoting (even diverting) resources to develop these evaluation skills in our frontline clinical services.

Clinicians need powerful, simple-to-use tools so that applying single case experimental design methodology in routine practice becomes the obvious choice. At that same July conference, Robyn Tate, Dr Michael Perdices and colleagues demonstrated their current work to develop an online training program to guide and accredit RoBiN-T raters. Through this online tool, trainees are coached to correctly evaluate and rate single case experimental design studies, and to compare their ratings with those of an expert consensus panel. While the program is primarily designed to train raters, in undertaking the training trainees also learn the core skills to conduct such a study themselves. When available, I think this interactive tool will immediately become the best starting point for developing a grounding in single case experimental design.