A Case of Good Design

The most important skill in rehabilitation is evaluating the outcome of interventions with each individual. The most important knowledge in rehabilitation is how to systematically conduct such evaluations using a single case experimental design.

This may not be self-evident. No doubt there would be dissenters to these statements. And clearly, there are many other skills and much other knowledge that we would expect in any competent rehabilitation practitioner. For instance, the ability to skillfully assess capacity and impairment, and to contextualize these in relation to community integration, is important. An awareness of evidence-based approaches to rehabilitation, and the ability to bring knowledge from one's particular professional background to bear on implementing them effectively, is likewise important. Yet, as a field we are now well aware of the limitations of randomized trials—in particular, that effects observed on a group basis in carefully selected sub-populations can only get us so far in knowing what will work for a specific person at a specific place and time. So while we consider the available evidence, draw on knowledge and experience, and provide the best interventions we can, we need something more.

As an undergraduate psychology student I completed a course on psychophysics. In one laboratory session, in an old house on the outskirts of campus, the professor demonstrated the next task. Placing a pair of heavy, sound-isolating headphones on, he spoke into a microphone. His voice was returned through the headphones and he spoke normally. A switch was then thrown, introducing a delay of a couple of seconds into the audio feedback he was receiving. Coherence dissolved. His speech became slurred, halting, irregular—barely interpretable at times. Removing the headphones, he explained that fluent human speech is predicated on the continuous feedback mechanism inherent in listening to our own voice. When this is disrupted, we are thrown off track. He noted that some are more affected by this than others, and that his wiring left him particularly susceptible. We were invited to experience derailment.
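As an aside, for those who like to see the mechanism spelled out, here is a minimal sketch of the delay line behind that demonstration: incoming audio is buffered and only played back after a fixed lag. The sample rate, delay length and function name are illustrative assumptions on my part, not details of the original apparatus.

```python
# Minimal sketch of a delay line, the mechanism behind delayed auditory
# feedback: microphone samples are buffered and only played back after a
# fixed delay. All values here are illustrative, not from the original demo.
from collections import deque

SAMPLE_RATE = 8000                 # samples per second (illustrative)
DELAY_SECONDS = 2.0                # "a couple of seconds" of delay
DELAY_SAMPLES = int(SAMPLE_RATE * DELAY_SECONDS)

def delayed_feedback(input_samples, delay_samples=DELAY_SAMPLES):
    """Return what the speaker hears: silence at first, then their own
    voice shifted later in time by the delay."""
    buffer = deque([0.0] * delay_samples)   # pre-filled with silence
    output = []
    for sample in input_samples:
        buffer.append(sample)                # newest microphone sample in
        output.append(buffer.popleft())      # sample from delay_samples ago out
    return output

# Tiny example: with a 3-sample delay, the voice arrives 3 samples late.
heard = delayed_feedback([0.1] * 5, delay_samples=3)   # [0.0, 0.0, 0.0, 0.1, 0.1]
```

With no buffering the loop is immediate (normal speech); with two seconds of buffering, the feedback that fluent speech relies on lags hopelessly behind.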

What matters most in rehabilitation is long-term outcomes, yet in many cases clinicians receive only scant information about whether their interventions continued to work in the short term beyond discharge, let alone over the years beyond. In essence, in clinical practice our feedback loop often appears to be disrupted. Goal setting and outcome evaluation are now appropriately commonplace in rehabilitation services. However, pre- and post-treatment assessments in themselves are insufficient to systematically demonstrate that an intervention was the cause of any changes observed for the person receiving rehabilitation. We know that factors such as spontaneous recovery are directly correlated with time, and thus with time spent in our rehabilitation services, and thus with progress through treatment. To close the feedback loop, we need a way to rigorously demonstrate whether an intervention worked with this specific person. Fortunately, that methodology already exists: the single case experimental design.
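To make that concrete, here is a minimal sketch of what closing the loop with single case data can look like: repeated measurements across a baseline (A) phase and an intervention (B) phase, summarized with the percentage of non-overlapping data (PND), one simple non-overlap index used in single case research. The outcome measure, scores and function below are invented for illustration and are not drawn from any particular study.

```python
# Sketch of a simple A-B phase comparison for one client, summarized with
# the percentage of non-overlapping data (PND). Data are hypothetical.

def pnd(baseline, intervention, higher_is_better=True):
    """Percentage of intervention-phase points that exceed the most
    extreme baseline point in the therapeutic direction."""
    if higher_is_better:
        threshold = max(baseline)
        non_overlapping = [x for x in intervention if x > threshold]
    else:
        threshold = min(baseline)
        non_overlapping = [x for x in intervention if x < threshold]
    return 100.0 * len(non_overlapping) / len(intervention)

# Hypothetical weekly goal-attainment scores for one client
baseline_phase = [3, 4, 3, 5, 4]           # repeated measures before treatment
intervention_phase = [6, 7, 6, 8, 7, 8]    # repeated measures during treatment

print(f"PND = {pnd(baseline_phase, intervention_phase):.0f}%")   # 100% here
```

A single A-B comparison like this is still weaker than a design with withdrawal or multiple baselines, but even this much gives clinician and client a shared, explicit record of whether things changed when treatment began.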

Work in this area is being pursued most vigorously by Prof. Robyn Tate at the University of Sydney and various colleagues in Australia. They have been iterating a measure to distinguish high-quality, well-controlled single case research designs from qualitative case studies and the steps in between. It was initially published in 2008 as the Single Case Experimental Design (SCED) scale (available for download from psychbite.com), and a revised version, renamed the Risk of Bias in N-of-1 Trials (RoBiN-T) scale, is in development and adds further sophistication. An evolution of the PEDro-P scale for evaluating randomized and non-randomized controlled trials, these scales were designed for evaluating the quality of N-of-1 trials. Beyond this, they provide an overview of the considerations in designing a good single case study from the outset. As a result, this work is compulsory reading for all my postgraduate students.

Implementing single case experimental designs in clinical practice will not necessarily be straightforward. The Neuropsychological Rehabilitation SIG of the WFNR held its annual conference in Bergen, Norway in July 2012. Among much good work presented there, Dr Henk Eilander of the Department of Brain Injury, Huize Padua, Netherlands, presented the poster Feasibility of single-case study designs to evaluate neuropsychiatric treatment of behaviour disorders after severe acquired brain injury. (Dr Eilander was good enough to allow the poster to be hosted on this site so that you can access its full content.) A key conclusion: "Although case studies are feasible in a clinical setting with limited resources, the naturalistic character of this study as well as the inexperience with systematic research resulted in too much variability to be able to draw firm conclusions on the effects of the medication." My reflection from their valuable study—we clearly need to be devoting (even diverting) resources to develop these evaluation skills in our frontline clinical services.

Clinicians need powerful, simple-to-use tools so that applying single case experimental design methodology in routine practice becomes the obvious choice. At that same July conference, Robyn Tate, Dr. Michael Perdices and colleagues demonstrated their current work to develop an online training program to guide and accredit RoBiN-T raters. Through this online tool, trainees are coached to correctly evaluate and rate single case experimental design studies, and to compare their ratings to those of an expert consensus panel. While the program is primarily designed to train raters, trainees also learn the core skills needed to undertake such a study themselves. When available, I think this interactive tool will immediately become the best starting point for developing a grounding in single case experimental design.

Coulter Memorial Lecture

Neuroscience is providing us with remarkable data on brain functioning, and with it a reductionist dilemma. Everything from chemistry to sociology could be explained in terms of physics, at least theoretically. Yet, "I ran the car into the wood chopping block" is a far more evocative story than any precise description of the physics of the event. While supplying a complex mathematical equation might most precisely define the effect on the car, it is the story that enables my wife to picture the damage I have caused. Ah, but what if the story was, "I think I ran the car into the wood chopping block"? And (humor me here) what if you couldn't go and look at the car to check? A description of the forces on the car would help separate a liaison with a former tree from the bump of just running over a pothole.

Dr. Keith Cicerone presented Facts, Theories, Values: Shaping the course of neurorehabilitation as the 60th John Stanley Coulter Memorial Lecture. The lecture contains many valuable points that you will need to read for yourself. He commented, for example, on our growing understanding of neuroplasticity, informed primarily by advances in neuroscience:
The roles of behavioral variability and predictability are central to recent investigations of executive functioning in relation to the frontal lobes. In this framework, executive functioning (task setting and monitoring) is related to the dorsolateral frontal cortex; emotional and behavioral regulation is related to the medial and lateral orbitofrontal cortex; and the rostral prefrontal cortex is related to "metacognition" involving the integration of motivational, emotional and cognitive activities. These same processes—involving the supervisory attentional system and anticipatory neural network—are central to the rehabilitation of cognitive impairments through meta-cognitive strategy training, characterized by interventions to foster anticipation and planning, response monitoring, and self-evaluation. This may represent at least the beginning of a theory of cognitive remediation that integrates neurologic, neuropsychological, and rehabilitation concepts and mechanisms…

In our own work, we have suggested that meta-cognitive strategy training directed at improving patients' self-regulation of both cognitive and emotional processes leads to increases in patients' self-efficacy beliefs, specifically in their confidence in managing residual cognitive and emotional symptoms. Improvements in perceived self-efficacy (and related concepts, such as maintaining a positive problem orientation) are directly related to positive outcomes, particularly patients' subjective well-being and life satisfaction.

This approach to rehabilitation puts the patient's subjective experience and beliefs at the center of the rehabilitation process.

Returning to physicists vs. chopping blocks, our neurorehabilitation practice often seems most like the uncertain story—"I think… perhaps it could be..." Our formulations of the difficulties facing the people we work with are informed by many things. We draw on theory, on observation, on formal assessments, on information about the impairment to the brain, and, hopefully, on listening to the person's own experiences and perceptions, as well as those of the people around them. We make our best attempts to draw a connection between these explanations and our knowledge of what works in neurorehabilitation interventions. And then we do whatever it takes. And that's not so bad.

A clearer neurobiological model of brain processes after impairment, and of how they change in rehabilitation, could plausibly turn much of this on its head. We might discover some chopping blocks were potholes, and some potholes were something else altogether. But a clinician can't solely be a physicist (or a neuroscientist). Higher-level subjective human experience is imbued with meaning not circumscribed by neurobiology. It seems to me that in order to translate advanced neuroscience insights into neurorehabilitation practice, we're going to need a much clearer map to guide us.

Real artists ship

In a recent paper in Neuropsychological Rehabilitation, PDA and smartphone use by individuals with moderate-to-severe memory impairment: Application of a theory-driven training programme, Svoboda, Richards, Leach and Mertens describe the provision of mobile computing technology to ten people with acquired brain injury. Their participants engaged well with the technology, and a rigorous ABAB design provided strong evidence that the intervention was effective for these participants. In somewhat of a departure from usual practice, the paper does not state the smartphone or PDA models used by the ten participants. The authors do reference individual case studies they had previously published about two of their participants. One of those case studies described the use of a Treo 680, while the more recently published case study did not report the actual technology used. In the absence of other information, it seems likely that first-generation smartphones like the Treo 680 were the newest devices used in the trial, and the references to PDAs indicate that some older Palm Pilot devices were likely also used. Perhaps device models were not described because the authors saw this as a potential distraction from the results, which they felt remain applicable to more recent technology.

I agree that findings based on earlier devices should certainly still be considered relevant, a point I make in a chapter in the forthcoming Oxford Handbook of Clinical Geropsychology and one also discussed by Gillespie, Best and O'Neill (2012). However, it remains possible that future research will demonstrate important differences in the effectiveness of one generation of technology over another. It is a pity that a clear description of the technology was not provided by Svoboda et al.—it was, after all, integral to the intervention. Even in reporting somewhat older technology, they remain in good company. In talks given last week to both the APS College of Clinical Neuropsychologists in Brisbane and the University of Queensland School of Psychology, I noted that, as far as I am aware, no study describing the use of modern touchscreen smartphones has yet been published in neurorehabilitation. This is despite it now being over five years since the Apple iPhone redefined this market in 2007. While studies with touchscreen devices are no doubt coming, this starkly reflects our disconnect: compared with the rate of evolution in technology, academic research is glacial.

The authors placed an emphasis in this paper on the training approach they used. Svoboda et al. used relatively intensive clinician input—an average of eight hours of individual training with the device per participant—and an errorless fading-of-cues protocol to train their participants to use their devices. (This is similar to the approach described in the Australian study that was the focus of the most recent Synapse Voices podcast with Belinda Carr and Natasha Lannin.) The training approach appeared well considered and suitably based on theory and past research. It is worth reflecting, however, that the authors did not actually test their belief that this rigorous training approach was necessary to generate the positive changes observed in their participants. The methodology used in the paper is only capable of demonstrating that such an intervention was sufficient to generate the change. It is possible that a less intensive training approach would have been equally effective, something it would be useful to examine in a future study.
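For readers less familiar with the design behind that sufficiency claim, here is a minimal sketch of the logic of an ABAB (withdrawal) design: the intervention is introduced, withdrawn, and then reintroduced, and the target measure is expected to track those phase changes. The phases, outcome measure and numbers below are invented for illustration and are not Svoboda et al.'s data.

```python
# Sketch of ABAB (withdrawal) logic with invented, hypothetical data.
from statistics import mean

phases = [
    ("A1", "baseline (no device)",   [2, 3, 2, 3]),
    ("B1", "intervention (device)",  [6, 7, 7, 8]),
    ("A2", "withdrawal (no device)", [3, 2, 3, 3]),
    ("B2", "intervention (device)",  [7, 8, 7, 8]),
]

for label, condition, weekly_scores in phases:
    print(f"{label} {condition}: mean = {mean(weekly_scores):.1f}")

# If performance rises in each B phase and falls back when the device is
# withdrawn, the design can attribute the change to the intervention rather
# than to time, practice, or spontaneous recovery. Even so, it only shows
# the intervention as delivered was sufficient to produce the change, not
# that every component of it (such as intensive training) was necessary.
```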

Svoboda et al. are to be commended for using mainstream, shipping technology in their research. More than two decades of research into customized mobile computing devices as cognitive prosthetics has resulted in precious few devices or services that can be used in further studies by other research teams, let alone products that can actually be issued to clients to assist with their day-to-day difficulties. Steve Jobs, co-founder of Apple Inc., is reputed to have said, "Real artists ship". Though famed for a perfectionist attention to product design and interface detail, Jobs understood something that too few people do—that value is only generated when a product is actually shipped for widespread use. By focussing their research on mainstream technology and services, Svoboda et al. are setting an example that all of us should follow. Rehabilitation needs actual shipping products.

Washing our hands of it all...

Dijkers, Murphy and Krellman's paper, Evidence-Based Practice for Rehabilitation Professionals: Concepts and Controversies, was released online in the Archives of Physical Medicine and Rehabilitation last week. The authors trace four decades of change in what is regarded as evidence, and two decades of our field attempting to employ evidence-based practice. They review the various criticisms of evidence-based practice, demonstrating that these largely hold little validity and should not be barriers to applying evidence in our practice. They note, however:
"...knowledge of the principles and procedures of EBP alone may not be enough to actually use research evidence in practice. McCluskey and Lovarini tested the effectiveness of a 2-day interactive workshop in changing knowledge of and attitudes toward EBP, and tracked patterns of implementing EBP (eg, searching literature, appraising research, and using it in practice) for 8 months after the workshop. Although knowledge and attitudes significantly improved over time, there was very little implementation."

Innovation guru Tim Kastelle this week described handwashing in hospitals as the most important innovation ever:
"Oliver Wendell Holmes in the 1840s was one of the first people to suggest that hand-washing could reduce infections. Not many people paid attention.... There was about a seventy year gap between Semmelweis proving that hand washing saves lives until the practice was widely accepted. Even today, in many hospitals less than half of the health care practitioners follow the right procedures for hand washing."

Dijkers, Murphy and Krellman's paper is worth the attention of every reader. It is likely we all have some less-than-rigorous rehabilitation practices that we should wash our hands of. In that vein, Dijkers et al. offer specific, practical strategies that may assist. But if the experience of hand washing tells us anything, it is that mere knowledge of an effective intervention—even intimate knowledge—and the clear capacity to implement it are not, on their own, sufficient to generate lasting, consistent behavior change.

At the time of original posting, this article was not accessible via its DOI; until that was corrected, this update used an alternative link instead.
