Open and abundant data

I've spent the last two days at The Project: Digital Disruption, a conference on how digital disruption is changing the way we live and work. (See #ProjectDisrupt on Twitter for some coverage.) Given long academic publishing lead times, it was serendipity that during the conference my editorial, "Open and abundant data is the future of rehabilitation and research", was formally published in Archives of Physical Medicine and Rehabilitation. It leads the May 2014 issue. In that piece, I argue that many of the constraints on our previous research and practice will radically change as data becomes both open and abundant, and that new challenges and opportunities will come with that. Normally I'd summarise the primary conclusions of a paper here for readers who do not have access to the original article. However, Archives have been good enough to make my editorial open access, so it is freely available to all. Instead, then, I'll share four selected passages from the editorial in the hope of whetting your appetite.

On open data:
"In May 2013, the U.S. federal government mandated that both publications and datasets resulting from research they fund… must be made openly accessible in machine-readable formats."

On abundant data:
"…there is reason to believe that data will become increasingly abundant… The multiple real-time data streams that can be collected by smartphones, wearable devices, and ambient sensors embedded in the environment may soon provide us with access to more data on rehabilitation practice than we are equipped to process, let alone interpret."

On data analysis techniques:
"It seems fundamentally flawed that the resolution of the primary quantitative analysis approaches so widely understood in rehabilitation is so limited by the need to contain the risk of false positives."

In conclusion:
"…academic research appears to be moving in the direction of data openness as the default starting point, and despite competitive pressures this may catalyse innovation and progress."

The issues raised in the paper are not just relevant to researchers—they have implications for clinical settings too. I believe these developments will push researchers to be (even more) engaged with real-world rehabilitation provision, and that's positive. I'd warmly encourage you to read the full paper and send me both feedback and any leads for taking its ideas forward. You can access it via its formal home on the web, or jump directly to the PDF copy.

Babbage, D. R. (2014). Open and abundant data is the future of rehabilitation and research. Archives of Physical Medicine and Rehabilitation, 95(5), 795-798. doi:10.1016/j.apmr.2013.12.014

Re-imagining scholarship

From observing actual academic practice, one could conclude that our ethical responsibility for communicating research findings is discharged by producing peer-reviewed publications in academic journals, speaking about our research at professional conferences to our research (and sometimes clinical) peers, and providing letters summarising findings to our research participants. More engaged academics will also “disseminate” their research findings through various broadcast mechanisms: traditional media coverage via television, radio and newspapers, and information provided via websites, blog posts, and social media. The researcher's role is to speak; the general audience can listen (and perhaps, to a limited degree, ‘comment’, Facebook-style, at the bottom of the thread). Academics, even those working in applied areas like psychology or rehabilitation, publish many papers that have at best distant relevance to the world beyond their research community. When research is relevant to practice, it takes many, many years before that research translates into differences in practice, if it ever does. Meanwhile, from my informal observation, academics for the most part do not seem to regard these things as problems, or certainly not as their problems.

As a clinical psychologist who received training with a strong lean towards cognitive–behavioural explanations of behaviour, I tend to look for the contingencies acting on people’s behaviour—that is, the rewards and punishments that shape what people do. When I see something that at face value looks dysfunctional, experience has taught me to look for the ways in which that behaviour is actually functional for that person. Many countries have over the last decade or two developed systems for measuring and ranking academic departments (or in the case of New Zealand, individual researchers) in terms of their research activity or productivity. In New Zealand, this is called the Performance-Based Research Fund; over the Tasman they have the Excellence in Research for Australia system; while in the United Kingdom it is the Research Excellence Framework. These systems have been designed with the deliberate intention of rewarding research activity and productivity—to direct government funds to the places that are producing research of higher quality (and in greater quantity). The UK system, for instance, describes the top ‘four star’ level as being ‘quality that is world-leading in terms of originality, significance and rigour’. On the face of it, this doesn’t sound like a bad thing. Furthermore, to a greater degree than the other systems, the UK system has introduced a specific focus on the real-world impact of research. Their current six-year assessment period culminates in the assessment exercise in 2014, so we are yet to see the circle close on this new approach and be able to evaluate its effectiveness.

Whether the unit of assessment is a department (UK, AU) or an individual researcher (NZ), the evaluation of both quality and impact is on research—“original, world leading, paradigm shifting, internationally recognised research”. Yet the more I think about this, and the more I work with our clinical partners, the more I wonder whether we’ve lost track of the original purpose of research activity. Universities, certainly, should provide research-led teaching—something enshrined in the Education Act here in New Zealand. But in glorifying the research process, we have forgotten that research should be a means, and not the end. Indeed, it seems to me that we have systematically turned our scholars into mere researchers. A scholar is a learned person with deep knowledge in a subject area. Faced with a real-world problem to which there is a known solution, a scholar would be pleased to share their knowledge resulting in an effective improvement in the life of their community. A pragmatic researcher, in contrast, faced with a real-world problem to which there was already a known solution would arguably see no role for themselves—for even the evaluation paradigms that value the impact of research only value demonstrating the impact of my research (or “our” research, for departments). “Merely” solving a real-world problem by applying the knowledge of others does not rate highly in research assessment paradigms, and would be seen as community service and not considered research activity.

Like a “researcher”, a “scholar” would have advanced research capabilities, certainly—but these would merely be one tool at their disposal. And increasingly, given the vast quantity of research-based knowledge that has not translated into practice, a scholar would surely hang up their research toolkit for a good while and muck in on the less glamorous but far more impactful process of putting what we already know into practice. Unfortunately, this is the last thing a rational researcher in our current research environment would choose to do—or at least, they would do so at their own career peril—because it will not produce “research” that is “original” or “world-leading in terms of originality”, even if it is highly significant, community-enhancing, and done with the rigour of a true scholar.

Just think about that for a minute...

Surely—something needs to change.

Resources to raise community awareness of brain injury

The global brain injury awareness week ran from 10–16 March 2014, though in the US it was extended to the entire month of March. In recognition of this period, publisher Routledge made their 25 most downloaded brain injury articles of 2013, from across their journals, freely available. While brain injury awareness month may have officially ended, these articles will remain free until the end of April 2014. This is a great opportunity to continue to raise awareness of brain injury, and to share links to high quality resources with people who don't normally have access to subscription-based peer-reviewed journals.

The Routledge page on the collection provides links and full author details for these manuscripts, but I've summarised them here by title, grouped by approximate topic to help you filter through them. All links go straight to the article in question, and again, access is free until the end of April 2014. Kudos to Routledge for making these available. For 2015, it'd be great to see something like this again, and perhaps to see a number of publishers select the papers most directly useful to people with brain injuries and their families, and make those papers open access in perpetuity.

Mood, anxiety and anger after brain injury
Feasibility and initial efficacy of a cognitive-behavioural group programme for managing anger and aggressiveness after traumatic brain injury.

Cognitive behavioural therapy for depression and anxiety in adults with acquired brain injury: What works for whom?

Diagnosis and treatment of an obsessive–compulsive disorder following traumatic brain injury: A single case and review of the literature.

Neuropsychological functioning of combat veterans with posttraumatic stress disorder and mild traumatic brain injury.

Staff-reported antecedents to aggression in a post-acute brain injury treatment programme: What are they and what implications do they have for treatment?

Interventions for cognition
Evaluation of neuropsychological rehabilitation following severe traumatic brain injury: A case report.

Effectiveness of an electronic cognitive aid in patients with acquired brain injury: A multicentre randomised parallel-group study.

The needs of family and support people
Depression and anxiety in parent versus spouse caregivers of adult patients with traumatic brain injury: A systematic review.

Impact of a family-focused intervention on self-concept after acquired brain injury.

Service communication with clients
The effect of varying diagnostic terminology within patient discharge information on expected mild traumatic brain injury outcome.

Rehabilitation outcomes
Post-traumatic growth, illness perceptions and coping in people with acquired brain injury.

Executive function, self-regulation and attribution in acquired brain injury: A scoping review.

Patients' experience of return to work rehabilitation following traumatic brain injury: A phenomenological study.

The relationship between alcohol and cognitive functioning following traumatic brain injury.

Diagnostic procedures
Never say never: Limitations of neuroimaging for communicating decisions after brain injury.

Prevalence of traumatic brain injury in juvenile offenders: A meta-analysis.

Cognitive functioning
Differences in MMPI-2 FBS and RBS scores in brain injury, probable malingering, and conversion disorder groups: A preliminary study.

Executive function outcomes of children with traumatic brain injury sustained before three years.

Examination of outcome after mild traumatic brain injury: The contribution of injury beliefs and Leventhal's Common Sense Model.

Psychometric assessment measures
Impaired National Adult Reading Test (NART) performance in traumatic brain injury.

Sustained attention following traumatic brain injury: Use of the Psychomotor Vigilance Task.

Utility of the Mild Brain Injury Atypical Symptoms Scale to detect symptom exaggeration: An analogue simulation study.

Memory functioning in individuals with traumatic brain injury: An examination of the Wechsler Memory Scale–Fourth Edition (WMS–IV).

Use of the structured descriptive assessment to identify possible functions of challenging behaviour exhibited by adults with brain injury.

Focusing on rehabilitation trajectories

I once worked in an inpatient post-acute rehabilitation service that had developed an excellent model of staggered discharge. As people receiving rehabilitation progressed through the programme, they were increasingly supported to move back into the community. This would start with short trips, followed as appropriate by whole days, gradually moving to overnight stays and then more than one day at a time, until eventually they were ready for full discharge. Their rehabilitation support would then be fully handed over to the community-based team, who had already begun working with them during this process. At the time I joined this service, the funders had noted that bed occupancy rates were below target. Pressure was being applied to the service to increase occupancy rates, which primarily required us to pull back from this best-practice approach and focus on providing rehabilitation that resulted in our patients being back in their hospital beds at the end of the day. (Yes, I did revert to "patients" deliberately.) As a result, we were back to actually delivering "what we were paid for". And that analysis wasn't wrong. We were paid to provide a certain number of "beds". But why? Leaving aside the obvious aspects of simplicity of billing and continuation of historical practice, why were we being paid to deliver "beds", or, for those in other services not lucky enough to be paid for the empty ones as well, "bed days"?

These and related questions have captured much of my thinking in recent times, because there's no question that the way we fund services cannot help but drive, or at least constrain, the way we provide services. That could be fine, if our funding models could be tuned to directly reward best practice. I've had the opportunity to see how the specifics of funding arrangements differ across funders and countries, though many have similarities. What is most typical, however, particularly for post-acute rehabilitation, is that they do not tend to enable, let alone reward, innovation and flexibility in the way services are delivered to clients.

In last month's editorial in Archives of Physical Medicine and Rehabilitation, Dr Gerben DeJong asks, Are we asking the right question about post-acute settings of care? He suggests that most of our current funding arrangements and research are focussed on the inputs and outcomes of contact with single rehabilitation providers at single points in a person's recovery journey, when we should instead be focussing on entire rehabilitation trajectories. I entirely agree. This is a topic of considerable interest to me, to colleagues at Auckland University of Technology and elsewhere, and to our clinical partners. We continue to work through how we might evolve our models of service delivery and research to place the focus squarely on a person's full rehabilitation trajectory. The ideal would be to align funding models to ensure that the contingencies are in place to reward practice that supports a person's best long-term outcomes.

If you'd like to hear more on this topic, there's a podcast episode you might want to listen to. As Senior Media Editor of Archives, I interviewed Dr DeJong in the February podcast (11 minutes | podcast collection | podcast feed | mp3 file). And note that while the paper requires subscription access to the journal, the podcast episodes are always free.

DeJong, G. (2014). Are we asking the right question about postacute settings of care? Archives of Physical Medicine and Rehabilitation, 95(2), 218-221.

DeJong, G. (Interviewee), & Babbage, D. R. (Interviewer/Producer). (2014, February). Are we asking the right question about post-acute settings of care? Archives of Physical Medicine and Rehabilitation. [Audio podcast]. Retrieved from http://archives-pmr.org

Mobile, with an electric quality

Humans are tool users—one of our defining characteristics, I'm told. And I suspect there's no tool shaping our future right now more than the smartphone. This technology, and the ways in which it might reduce disability for people with cognitive impairment, is a major interest of mine and of many people in this field, from researchers to clinicians, people with brain injuries, and their family members. We all see great potential. The backdrop, however, is disappointingly decades of research activity in the area that has led, with only a few exceptions, to almost nothing in the way of actual shipping products that can be tested by other research teams, let alone actually used by people with brain injuries and their families. This must change, and in my view we must focus our attention squarely on mainstream technology to provide the solutions we're looking for.

I'm not alone in thinking this. Chu et al. examine this issue in their most welcome paper, Cognitive support technologies for people with TBI: Current usage and challenges experienced, published in Disability and Rehabilitation: Assistive Technology. In two focus groups of people with traumatic brain injuries and their caregivers, they explored participants' experiences of using off-the-shelf technologies to support cognitive functioning. Their paper draws out themes that are highly applicable to practice, and provides a roadmap for further research. Particularly interesting to grapple with are the opportunities and challenges inherent in the way these technologies do not just enable an individual but also support relationships within networks—allowing a family member to partner with a person with cognitive difficulties in a way that enables both to contribute to the remembering process, for instance. I imagine such links could be experienced in many ways—as bridges, as tethers, as lifelines—and finding the right ways to deliver such services will clearly be important to maximise both functioning and acceptability. Their paper covers many other issues. It sharpens the questions rather than attempting to answer them.


If this is a topic that interests you, there are a number of upcoming opportunities to hear me speak about mobile technology in rehabilitation. I will be giving a research seminar as part of the Person Centred Research Seminar open meeting series on Wednesday 28 August 2013. The talk runs from 12:00–1:00pm in Room AB217 on the Auckland University of Technology North Shore campus in Auckland, New Zealand. The talk is open to the public and you'd be most warmly welcome to come along. Arrive in good time, as you'll need to park in the nearby streets off campus. A map of the AUT North Shore campus is available here.
 
I'll also be talking at greater length about these issues in my upcoming professional development courses on Community Based Rehabilitation for Acquired Brain Injury, which I'm teaching with Prof. Barry Willer in Sydney on 3–6 September 2013 and in Auckland on 19–22 November 2013. These four-day courses cover a wide range of evidence-based practical guidance for brain injury rehabilitation, and you'd be welcome to join us at either course. Registrations are open now.

 

A note on the Performance-Based Research Fund

Dear John... oh how I hate to write
(hmm... maybe that's the crux of a PBRF-related issue right there)
Dear John, I must let you know tonight
That my love for teaching is gone
So I'm sending you this song
Tonight, I'm with another
You'd like him John, he's got a quality score over 6.5
So I'm sending you this letter
Dear John.

 
The research productivity of New Zealand universities is evaluated on a six-yearly cycle, and this evaluation determines a substantial part of the research funding each university will receive from the Performance-Based Research Fund (PBRF) over the following six years. Unlike most overseas equivalents, in New Zealand the unit of analysis is the individual academic, not the department.

As a Wellington-based academic who is not at Victoria University of Wellington, I've experienced a different side to the local news coverage of the PBRF results. What a strange beast PBRF is. Firstly, how great it is that The University of Auckland and Otago University (the only two New Zealand institutions with medical schools) weren't topping a list for once. And trust me, we have all heard that Victoria was the "top ranked" university. The average "quality" scores are, of course, the only metric that gets any attention. Did you know, however, that despite being "ranked sixth" on our "quality score", my current employer Massey University will actually receive 28% more PBRF funds than Victoria in 2013, based on the outcomes of the round? And that in the "quality evaluation" component specifically, Massey will receive 37% more funds than Victoria? How is that? Surely, since the PBRF is about distributing funds to where they are deserved, they would be going to Victoria? No. PBRF funds are based on the total amount of good quality research being done by an institution, not just how good it is on average when divided by the number of portfolios submitted. It's like comparing one home's bespoke dining table, lovingly polished till it gleams, with another home's entire good quality dining suite with chairs, and a bedroom suite as well. If you do less, you might indeed make it better on average... but you've still done less in total. Apples and guavas.
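To make the average-versus-total distinction concrete, here is a minimal sketch in Python. All the numbers are hypothetical, and the real PBRF funding formula is considerably more complex (it weights portfolio grades, subject-area costs, and other components I'm ignoring entirely); this only illustrates why an institution with a lower average score can receive more money:

```python
# Hypothetical illustration only: why a lower *average* quality score
# can still mean more *total* PBRF funding. The real formula weights
# portfolio grades and subject areas; here funding is crudely modelled
# as proportional to the sum of portfolio scores.

def average_score(scores):
    return sum(scores) / len(scores)

def total_funding(scores, dollars_per_point=10_000):
    # Funding follows the total quality produced, not the average.
    return sum(scores) * dollars_per_point

uni_a = [7, 7, 6]            # few portfolios, high average: the polished table
uni_b = [5, 5, 5, 5, 5, 5]   # many portfolios, lower average: the dining suite

print(average_score(uni_a), average_score(uni_b))  # 6.67 vs 5.0: A "ranks" higher
print(total_funding(uni_a), total_funding(uni_b))  # 200000 vs 300000: B gets more
```

University A tops the league table on average quality, yet University B receives half as much again in funding, because it submitted twice as many fundable portfolios.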

Here's another angle I see on this. Did you also know that I would lower the quality score of any academic psychology department in the country that I joined? What a great feeling. This, despite having been the New Zealand principal investigator of a four-year, three-site international clinical trial, with colleagues in the US and Canada, funded by the US National Institute on Disability and Rehabilitation Research during the PBRF period? This, despite having supervised four doctorates to completion during the period, another since, and with five further doctoral candidates submitting for examination in the first half of 2013? This, despite coordinating Massey's Doctor of Clinical Psychology programme on our Wellington campus, with over 20 concurrent doctoral candidates throughout the past three years? And this, despite having also taught, in addition to my other teaching, literally hundreds of frontline brain injury rehabilitation professionals, service managers and case managers through professional development courses during the PBRF period, meaning there may not be a brain injury rehabilitation service in the country that doesn't have staff I have personally taught? Opportunities for genuinely health service-impacting research abound in exactly this kind of engagement.

As it has been structured in New Zealand, the PBRF is fundamentally an exercise in individual behaviour change at a mass population level, an issue that I have a strong personal and professional interest in. Despite this, to my knowledge psychologists haven't been involved in the design or evaluation of the PBRF. In my opinion, the behaviours that have been observed at an institutional and individual academic level in the PBRF process were entirely predictable, and in many cases, unfortunate.

I'm moving to AUT University in Auckland in July. I'll be working for the next year as a Senior Lecturer in Clinical Rehabilitation in the School of Rehabilitation and Occupation Studies, in a position where I'll work with the Person Centred Research Centre. I'll spend about half my time on translational brain injury rehabilitation research in partnership with ABI Rehabilitation (who are mostly funding the position) and other frontline brain injury rehabilitation services in Auckland and, to some degree, throughout New Zealand. The other half of my time I'll be back at AUT's North Shore campus, primarily pursuing other neurorehabilitation research. I might be involved in supervising one or two graduate students, and I may give a few guest lectures during the year. But the primary focus will be research, working with a great team who are both highly productive and highly personable. I enjoy teaching (when I have the time to do it properly), and my teaching evaluations have frankly been far better than the evaluation I received from the Tertiary Education Commission through the PBRF process last week. But I am more passionate about impacting brain injury rehabilitation services in New Zealand and internationally through relevant, cutting edge research. So you could say that in my new job I've won second division PBRF Lotto. (First division would have been a Rutherford Discovery Fellowship.) I've found an exit that will resolve the workload strain that has been affecting my family life, while also positioning me for future research productivity. But most of my academic colleagues here have not seen greater space created for them to do more or better research through the PBRF process... just additional pressure to be more productive alongside all their existing commitments.

I have personal knowledge of, and deep respect for, some of the people who designed the PBRF. And I'm sure the PBRF was designed for excellent reasons. But something needs to change here. Because our university system isn't just broken—it's breaking people.

Season to taste

A holiday season is here. For those in the southern hemisphere like myself, the Christmas period marks not just that holiday but the conjoint start of the summer 'vacation' period (we don't call it that down here), which in northern climes is much more sensibly segregated to the other half of the year. But regardless of whether your Christmas may be white or sunburnt red, it is a time when many of us will be taking at least a few days away from the working environment. As I've been heading into this period, I started to think about the experience of people with disabilities in this holiday season, and particularly of those who are resident in rehabilitation services. We know that many people—like Dan—want to get home for Christmas and other holidays. It looks like Dan will, but many people won't be able to do this.

I set out, therefore, to see what the peer-reviewed literature could tell us on this topic. My attempts to find accumulated knowledge about people's experience of a holiday season spent in rehabilitation have been fairly unsuccessful, however. The only literature I could find on holiday-related environmental manipulations in health services was recent (highly quasi-) empirical evidence indicating that tinsel is harmful not just to pets but also to blood gas analyzers. The authors light-heartedly suggest Christmas decorations could be an impediment to patient care (or at least to infrared touch screens). Yet we do know that small environmental differences can have important psychological implications. It was demonstrated in the 1970s that older adults given care of potted plants have a mortality advantage over their peers who possess a potted plant but aren't charged with its care—yes, we humans are less likely to die if a potted plant needs us. So tinsel would seem worth the risk. What else should we do beyond this, however, to make our rehabilitation services places where the holidays can be a positive experience? And what is the effect for clients when a highly valued holiday season fails to live up to previously cherished beliefs about the way it is 'supposed' to be?

No doubt all inpatient rehabilitation services make efforts towards a more festive environment during the holidays. I confess I have only a vague recollection of the good efforts other staff made (probably mostly our nursing staff) when I worked as a Clinical Neuropsychologist at the Wolfson Neurorehabilitation Centre in London in the early noughties. To my shame, I wasn't ever involved in festivities at the Wolfson on the holy day itself. My lack of anecdotes thus mirrors the paucity of published information on this topic—apparently, a complete absence? So it'd be good to hear some of your experiences. If you've got a heart-warming, sobering, or enlightening story of the experience of neurodisability and the holiday season in rehabilitation services, I'd like to hear it. Please email me if you're able to share not just with me but with others—being appropriately mindful of confidentiality. I'll distill what I can, and share thoughts back with our community here in an update in the next week or two. And as we reflect on these issues, perhaps this can be the start of a deeper appreciation of how to provide more supportive holiday seasons in future years for any clients who have felt somewhat unseasonal in our services in the past.

Wishing you and yours a safe and peaceful holiday season.

Common sense research

"Common sense is not so common."—Voltaire (1764)

I was reminded of this widely re-quoted saying when reading Instilling a research culture in an applied clinical setting, recently published in Archives of Physical Medicine and Rehabilitation. In their paper, Dr Michael Jones and colleagues are clear and thorough in outlining a wide range of practical issues and considerations that arise in pursuing the goal of integrating research into our clinical services.

At one level it could be easy to read this paper lightly, and risk dismissing their suggestions as 'just common sense'. This would be a mistake, for at least two reasons. Firstly, their coverage of the many considerations that may arise is comprehensive. I think few clinical settings would have fully worked through implementing every suggestion in this paper, so there are practical action points we can all take away. Secondly, one of the most helpful aspects of genuinely good advice is what it leaves out: it doesn't recommend alternatives that might sound good on first hearing but could ultimately lead to undesirable outcomes. I'm not claiming to have a deep grasp of all the things we shouldn't be doing, but my reading of this paper is that it offers good advice—both in what it does say, and in what it does not.

While a few aspects of the paper speak specifically to the United States context (e.g., information about funding agencies), this should not detract from its value for an international audience. And if you're a clinician or manager who wants to begin the process of bringing a research culture into your organization, this paper will provide a dozen ideas for where you could start.

Middle Earth is a dangerous place

The Lancet Neurology last week published Incidence of traumatic brain injury in New Zealand: a population-based study. Prof. Valery Feigin from AUT University, with Dr. Alice Theadom, Dr. Suzanne Barker-Collo, Dr. Nicola Starkey, Prof. Kathryn McPherson, and colleagues, reported on their impressive study, which applied a fine sieve to an entire urban and rural catchment population of over 170,000 in the Waikato region of New Zealand for a one-year period. This is the most thorough incidence study of traumatic brain injury yet conducted—the first large-scale population-based study covering both urban and rural areas. The study defines a new standard for future research in this area. (It defines a standard metaphorically but also literally—see the paper's Panel 2: Suggested criteria for population-based studies of traumatic brain injury incidence and outcomes, p. 10.)

The case identification methodology is impressive:
“We aimed to assure complete case ascertainment using multiple overlapping sources of information about all cases, admitted and not admitted to hospital, both fatal and non-fatal. This case ascertainment included the following: daily checks of all public hospitals and emergency departments (including surgery and neurosurgery departments) in the study region; monthly checks of CT and MRI records, hospital discharge registers for public and private hospitals in the wider Waikato region, family doctors, rehabilitation centres, and outpatient clinics; quarterly checks of coroner and autopsy records and rest homes; and a yearly check of ambulance services, the prison located within the study region, and the Accident Compensation Corporation (ACC) database. The ACC is a government-supported no-fault insurance agency that funds treatment and rehabilitation for all New Zealand residents with injuries. Cases were also identified from the national death register (we ascertained all death certificates with any mention of TBI). We made every effort to capture data for all individuals with mild TBI who were not admitted to hospital, by including those from family doctor practices providing direct referrals of new and suspected cases of TBI, and by doing checks of accident records of community health services, schools, and sport centres (within and just outside the catchment area), and through self-referrals (the study was widely advertised in the study area via television, newspaper articles, and newsletters and posters). Final checks for complete case ascertainment included reviewing computerised hospital separations data (deaths, discharge, and transfers) for public hospitals with ICD-10 S00-S09 codes for head injury (via the National Health Index number). All TBI cases were checked against existing cases in our TBI registry, to identify any duplicates. Remaining suspected cases (ie, cases for which the presence of TBI was not clear and needed to be verified) of TBI were cross checked with hospital discharge lists, hospital inpatient management records, lists of excluded cases (ie, TBI criteria not met, individuals who did not live in the study area at the time of injury), and lists from other sources (ie, schools, sports groups, rest homes).” (pp. 5-6).

By now it will not surprise you that their evaluation of case information was equally thorough once participants were identified.

The study identified 1,369 traumatic brain injuries that occurred during the study year, including 71 moderate to severe injuries. This equates to an overall incidence of 790 cases per 100,000 person-years. The authors note this is substantially higher than the incidence observed in other high-income countries in Europe (47–453 cases per 100,000) and North America (51–618 cases per 100,000), and also higher than World Health Organisation estimates. It is possible that there is something different about New Zealand. However, given the rigorous methodology of this study, the more likely outcome is that future research will find similar incidence rates in other high-income countries as well. The authors note that, regrettably, even higher incidence rates again are expected in lower-income countries.
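As a quick sanity check on that headline figure, here is the incidence arithmetic in Python. The denominator below is my approximation: the paper reports a catchment population of "over 170,000" observed for one year, and I've assumed roughly 173,000 person-years to match the published rate:

```python
# Incidence rate = cases observed / person-years at risk, scaled per 100,000.
cases = 1369
person_years = 173_000  # assumed; the paper says the catchment was "over 170,000"

incidence_per_100k = cases / person_years * 100_000
print(round(incidence_per_100k))  # -> 791, in line with the reported 790
```

The paper's own denominator will differ slightly, which is why the published figure is 790 rather than the 791 this rough calculation gives.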

There are many ways in which good epidemiological data contributes to health service delivery. Good data guides injury prevention efforts. The kind of partnerships this study describes is an example to us all; building and maintaining such networks for not just research but also clinical purposes is a goal worthy of consideration in itself. Meanwhile, a key question that arises for rehabilitation services is: what is the outcome for the many, many people with mild (and moderate) injuries who are not being captured by the health system at all, let alone receiving rehabilitation services? Do they spontaneously make a good recovery? How much worse off are they than those who receive services? Given the high numbers of people not accessing services, are there additional population-based interventions we could provide to mitigate some negative outcomes at a distance? On the whole, we don't have good answers to these questions. With this new high-water mark set for how many injuries are occurring, the importance of these questions is further underscored.

A Case of Good Design

The most important skill in rehabilitation is evaluating the outcome of interventions with each individual. The most important knowledge in rehabilitation is how to systematically conduct such evaluations using a single case experimental design.

This may not be self-evident. No doubt there would be dissenters to these statements. And clearly, there are many other skills and much other knowledge that we would expect in any competent rehabilitation practitioner. For instance, the ability to skillfully assess capacity and impairment, and contextualize these in relation to community integration, is important. An awareness of evidence-based approaches to rehabilitation, and the ability to effectively bring knowledge from their particular professional background to bear on implementing these, is likewise important. Yet, as a field we're now well aware of the limitations of randomized trials—in particular, that effects observed on a group basis in carefully selected sub-populations can only get us so far in knowing what will work for a specific person at a specific place and time. So while clinicians consider the available evidence, draw on knowledge and experience, and provide the best interventions we can, we need something more.

As an undergraduate psychology student I completed a course on psychophysics. In one laboratory session, in an old house on the outskirts of campus, the professor demonstrated the next task. Placing a pair of heavy, sound-isolating headphones on, he spoke into a microphone. His voice was returned through the headphones and he spoke normally. A switch was then thrown, introducing a delay of a couple of seconds into the audio feedback he was receiving. Coherence dissolved. His speech became slurred, halting, irregular—barely interpretable at times. Removing the headphones, he explained that fluent human speech is predicated on the continuous feedback mechanism inherent in listening to our own voice. When this is disrupted, we are thrown off track. He noted that some are more affected by this than others, and that his wiring left him particularly susceptible. We were invited to experience derailment.

What matters most in rehabilitation is long-term outcomes, yet in many cases clinicians receive only scant information about whether their interventions continued to work even in the short term beyond discharge, let alone over the years that follow. In essence, in clinical practice our feedback loop often appears to be disrupted. Goal setting and outcome evaluation are now appropriately commonplace in rehabilitation services. However, pre- and post-treatment assessments in themselves are insufficient to systematically demonstrate that an intervention was the cause of any changes observed for the person receiving rehabilitation. We know that there are factors such as spontaneous recovery that are directly correlated with time, and thus with time spent in our rehabilitation services, and thus with apparent progress through treatment. To close the feedback loop, we need a way to rigorously demonstrate whether an intervention worked with this specific person. Fortunately, that methodology already exists: the single case experimental design.
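To make the logic concrete, here is a minimal sketch of one building block of such a design: comparing a baseline (A) phase against an intervention (B) phase using a nonoverlap statistic from the single-case literature (Nonoverlap of All Pairs). The scores and phase lengths below are entirely hypothetical, and a real single case experimental design would add replication (for example an ABAB withdrawal design or multiple baselines) plus visual analysis, rather than relying on one A-B comparison:

```python
# Hypothetical sketch: quantifying an A (baseline) vs B (intervention)
# phase comparison with Nonoverlap of All Pairs (NAP). NAP is the
# proportion of all (baseline, intervention) score pairs in which the
# intervention score is better; ties count as half.
from itertools import product

def nap(baseline, intervention):
    pairs = list(product(baseline, intervention))
    favourable = sum(1.0 if b > a else 0.5 if b == a else 0.0
                     for a, b in pairs)
    return favourable / len(pairs)

# Made-up daily scores on a target behaviour (higher = better).
phase_a = [3, 4, 3, 5, 4]     # baseline observations
phase_b = [6, 7, 6, 8, 7]     # observations during intervention

print(f"NAP = {nap(phase_a, phase_b):.2f}")  # 1.00: complete nonoverlap
```

A single A-B contrast like this cannot by itself rule out spontaneous recovery, which is exactly why the replicated designs carry the causal weight.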

Work in this area is being pursued most vigorously by Prof. Robyn Tate at the University of Sydney and various colleagues in Australia. They have been iterating a measure to distinguish high quality, well controlled single case research designs from qualitative case studies and the steps in between. It was initially published in 2008 as the Single Case Experimental Design (SCED) scale (available for download from psychbite.com), and a revised version, renamed the Risk of Bias in N-of-1 Trials (RoBiN-T) scale, is in development and adds further sophistication. An evolution of the PEDro-P scale for evaluating randomized and non-randomized controlled trials, these scales were designed for evaluating the quality of N-of-1 trials. Beyond this, they provide an overview of the considerations in designing a good single case study from the outset. As a result, this work is compulsory reading for all my postgraduate students.

Implementing single case experimental designs in clinical practice will not necessarily be straightforward. The Neuropsychological Rehabilitation SIG of the WFNR held its annual conference in Bergen, Norway in July 2012. Among much good work presented there, Dr Henk Eilander of the Department of Brain Injury, Huize Padua, Netherlands, presented their poster, Feasibility of single-case study designs to evaluate neuropsychiatric treatment of behaviour disorders after severe acquired brain injury. (Dr Eilander was good enough to allow the poster to be hosted on this site, so you can access its full content.) A key conclusion: "Although case studies are feasible in a clinical setting with limited resources, the naturalistic character of this study as well as the inexperience with systematic research resulted in too much variability to be able to draw firm conclusions on the effects of the medication." My reflection from their valuable study: we clearly need to be devoting (even diverting) resources to develop these evaluation skills in our frontline clinical services.

Clinicians need powerful, simple-to-use tools so that applying single case experimental design methodology in routine practice becomes an obvious choice. At that same July conference, Robyn Tate, Dr. Michael Perdices and colleagues demonstrated their current work to develop an online training program to guide and accredit RoBiN-T raters. Through this online tool, trainees are coached to correctly evaluate and rate single case experimental design studies, and to compare their ratings against those of an expert consensus panel. While the tool is primarily designed to train raters, trainees also learn the core skills needed to undertake such a study themselves. When available, I think this interactive tool will immediately become the best starting point for developing a grounding in single case experimental design.
