Tuesday, January 01, 2008

Never mind the quality

Governments today spend vast amounts of money on research. This research underpins policy decisions and therefore shapes even vaster spending on areas - housing, healthcare, education - that affect us all, directly or indirectly. If you have ever wondered how state decisions and spending come to be so often wrong and wasteful, you need to look at the nature of the research that underpins them. It is largely unknown to the public at large, but in the past forty years there has been a quiet revolution in the methods used by social scientists.

In the (genuine) science of chemistry, the terms "quantitative" and "qualitative" analysis have been in use for generations. Both are techniques for figuring out what the hell any given substance might be. With quantitative analysis, you break the compound under investigation down into its constituents and weigh (or otherwise measure) them carefully to get an idea of the chemical structure of the compound; with qualitative analysis you look at what the compound does - what its properties are. If it's a gas, does it extinguish a flame, or make the flame burn with a particular colour; does a glowing wooden splint burst into flames or stop glowing; what colour does litmus paper turn? By building up a profile of these properties, the identity of the compound or element can be ascertained. All very concrete, proper science.

These terms have now been adopted by social scientists. However, the meanings are a bit different to those in chemistry. In the social sciences, quantitative research means that old, boring dead-white-male stuff about gathering statistically significant data and analysing it rigorously. Qualitative analysis is much more exciting. Instead of surveying vast quantities of respondents (that boring old statistical significance), you collect data from much smaller samples, but you go into more detail with each person (or piece of evidence) assessed. This is, apparently, quality.

According to the Association for Qualitative Research:

Qualitative research often provides an unparalleled understanding of the motivations behind human behaviour, desires and needs.
According to the Wikipedia entry:
During the 1970s and 1980s qualitative research began to be used in other disciplines, and became a dominant - or at least significant - type of research in the fields of women's studies, disability studies, education studies, social work studies, information studies, management studies, nursing service studies, human service studies, psychology, communication studies, and others.
The first time I became aware of the nature of qualitative research was about five years ago when I glanced through one of the textbooks of a girl I knew who was an undergraduate student. This explained that qualitative research (QR) was controversial, but that the correct approach for the social scientist was to refuse to engage in any discussion of the merits of QR. It was OK to explain why they wouldn't take part in such discussions, though. I can't remember the name of the book, or its authors. It dangled limply from my fingers for a few minutes as the ramifications of teaching this sort of attitude to undergraduates skipped round my brain, cackling wildly. A blanket refusal even to discuss research methods is not going to lead to effective research. But perhaps that's the point. The purpose of QR seems to be, almost openly, that of justifying decisions you have already reached for reasons of political doctrine.

I can't reference my second encounter either. I was listening to Radio 4's Today programme a few years ago when a professor of gender studies was being interviewed. She had published a report with findings, about the changed roles of men and women, that were surprising and didn't seem to accord with experience, so the Today presenter asked her how she had obtained her results. She had interviewed, in depth, a dozen people. Who were they? Her students. So a highly politicised professor of a highly politicised subject asked her highly politicised students what they thought about gender relations and came up with results that were... highly politicised. The presenter was speechless. We'll return to the way sample groups are chosen for QR studies later, but for the moment it's worth reflecting on the fact that a very well-informed BBC journalist was unaware of the way almost all social research is now conducted, indeed has been conducted for decades. He knew of the policy decisions that were being made, but not of the basis for the studies cited in support of these policies. No attempt has been made to keep this secret, so far as I can see, so the fact that this is little-known is itself astonishing.

Let's get to a specific example that can be properly cited. I'm going to use a piece of public-policy-orientated QR. I haven't cherry-picked it; it was chosen in a semi-random way - it's the first result I got from Google when I searched for "qualitative research uk government policy". I have looked at a number of examples and can report that not once have I found a piece of research that seemed even at first glance to be in any way meaningful or reliable. And almost all present government policy, especially social policy, is justified by this sort of research.

The UK Department of Health's website carries a page about "complementary and alternative medicine" (CAM). This mentions some research it commissioned (emphasis added):
There is increasing public interest in, and use of, complementary and alternative medicine - particularly among patients with cancer. In response to the House of Lords Select Committee's desire to foster high-quality research into the CAM genre, and surveys which indicate that on average a third of patients with cancer had used some form of CAM, the Department provided funding for research in CAM (£1.3 million for the first round of a research capacity building scheme, and £324,000 for three qualitative research projects on CAM in the care of patients with cancer). This will help develop the evidence base for CAM in healthcare.
I have no idea what "a research capacity building scheme" might be, nor why it would cost £1.3M, but we can look at the published results of the £324,000 of qualitative research. And this research was intended to inform actual policy and spending decisions (emphasis added):
These projects aim to contribute to a better understanding of the demand for CAM therapies and their effects on patient-centred outcomes among patients with cancer. They are focussed upon CAM therapies as an adjunct to conventional forms of treatment and in palliative/supportive care. The outputs will help to inform both the provision of integrated services within the NHS and the future research agenda for CAM in the cancer field.
Three studies were funded, and I have just taken the first one as listed on the DoH's website (emphasis added):
Project reference no: 09/02
Lead researcher: Dr Philip Tovey
Principal Research Fellow
School of Healthcare Studies
Baines Wing, University of Leeds
[...]
Abstract

Complementary and alternative medicine (CAM) has achieved an exponential growth over the last two decades. There is strong evidence of its popularity amongst users and this trend is particularly pronounced in the area of cancer. However, it is also clear that services are often provided on an ad hoc basis, frequently by individual advocates rather than through co-ordinated strategy. Policy announcements in cancer care more broadly have recently emphasised a more 'rounded', and evidence based, approach. However, as the application of 'evidence' is not an objective process, effective health care planning on the role of CAM in cancer services will be dependent upon achieving an understanding of sites of integration and the motivations and perspectives of their stakeholders.

The aim of this project is to gain a clearer understanding of the demand for, use of, and decision making about, CAMs amongst cancer service users in order to inform the development of policy, practice and future research on CAM in cancer care. To achieve this we will address three sets of research questions that mirror and extend those detailed in the brief. These questions deal with issues such as the development of patient preferences, influences on decision making, expectations and interpretations of 'success', the role of varying forms of evidence etc. The project will complement (and be informed by) an existing ESRC/MRC funded project on CAM and cancer currently being led by the lead applicant.
Note that medicine - proper, evidence-based medicine - has been rebranded in the quotes that follow as "biomedicine". The report itself can be downloaded from here (pdf, 268k). From the opening pages (emphasis added):
Although still sporadic, CAM services are now being provided to cancer patients within some National Health Service (NHS) hospitals and NHS-affiliated hospices in the UK. These organisations are offering selected CAM therapies including (but not limited to) reiki, reflexology, aromatherapy, therapeutic massage, spiritual healing, acupuncture and hypnotherapy. However, the movement toward a more rounded, and less exclusively biomedical, approach to cancer care has been slow. Although there has been some movement towards more ‘integrative’ and ‘patient-centred’ practice in cancer care, a virtual stalemate in debate about ‘evidence’ and ‘efficacy’ has prevented significant progress being made.

Many biomedical clinicians and health researchers argue for the development of a biomedical-type evidence base before there is even consideration of the delivery of CAM therapies to NHS cancer patients (e.g. Ernst, 2001). ‘Unproven’ therapies, it is argued, should not be offered to patients (House of Lords, 2000) due to risks of potential harm to patients and the wasting of NHS resources. However, an increasingly popular perspective amongst CAM practitioners, patient groups and some social scientists alike, is that, professional gate-keeping, paradigmatic incommensurability and restrictive understandings of ‘evidence’ are the real barriers to state funding of CAM for cancer patients (e.g. Borgerson, 2005; Giordano et al, 2005). Perhaps more importantly, cancer patients themselves are increasingly choosing treatments regardless of their clinical ‘efficacy’ (e.g. Lewith et al, 2002; Rees et al., 2000); a pattern that suggests that the pursuits of evidence-based medicine (EBM) in cancer care may have only limited resonance with many patients’ experiences of disease and treatment processes.
I shall have to remember the phrase "paradigmatic incommensurability" for the next time I want to end a conversation. But the important point here, I think, is the disdain for evidence, efficacy and cure. They go further:
Some social scientists argue that CAM use, as such, represents a significant shift in conceptions of disease and selfhood (e.g. Doel & Segrott, 2003; Sointu, 2006; Sointu, 2006a). In particular, the notion of ‘wellbeing’ has emerged recently as a potentially useful concept for characterising what CAM offers to the individual. Departing from biomedical notions of being ‘cured’, ‘healthy’ or ‘disease-free’, wellbeing encapsulates notions of authenticity, recognition and self-determination; restructuring ‘health’ as a subjective and individualised process (Bishop & Yardley, 2004; Sointu, 2005; Sointu, 2005a).
There is also a disdain for conventional medical practitioners:
There is considerable evidence that use of CAM is not discussed openly between patients and their physicians and that CAM-related issues can create problematic dynamics within medical consultations (e.g. Adler & Fosket, 1999; Crock, Jarjoura, Polen & Rutecki, 1999; Mackenzie, Parkinson, Lakhani & Pannekoet, 1999). Tasaki, Maskarinec, Shumay, Tatsumura & Kakai (2002), for example, reported three main reasons for non-disclosure of CAM use by cancer patients to their physician: physicians’ indifference or opposition toward CAM use; physicians’ emphasis on scientific evidence; and, patients’ anticipation of a negative response from their physician. In their study, cancer patients reported physicians’ discouraging them to initiate or to continue a discussion about CAM.
The preference for "evidence-based" medicine seen in many doctors is "problematic".

How did they choose their interviewees? This is one of the controversial aspects of QR, and this study shows why:
The final sample included 80 cancer patients with a good distribution of ages from 20 to 87 and representation from all major cancer types and stages of disease. A third of the participants were male whereas two thirds were female. Given the high proportion of female CAM users reported in previous studies we expected to get more female participants. The majority of the interviewees were CAM users although around 15 percent were non-CAM users to gain some insight into perceptions from a range of usage levels. We deliberately sampled to include high numbers of CAM users to gain insight into experiences of CAM and biomedicine in relation to evidence. Thus, this is a study of predominantly CAM users who are cancer patients rather than a representative sample of all cancer patients; this recognition informed our interpretation of the results.
So in a study of patient preferences with respect to CAM, they chose a sample in which 85% of the participants had already exhibited a strong preference for CAM by opting to use it. How do you compensate for that in your results? The answer, it seems, is that you don't. I could quote almost the entire report for its WTF value, but you can download it and read it yourself if you like. The report does make the valid point that some patients see conventional medicine as depersonalised. But is the role of the nationalised health system to provide treatment that is known to be effective, or to operate according to patients' wishes, even when these will be ineffective? The report is in no doubt that it is the latter. Patients, not doctors, should dictate health policy.
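To see just how lopsided that sampling is, here is a minimal sketch - with entirely hypothetical numbers, not taken from the report - of what an 85%-CAM-user sample does to any estimate of overall patient preference. I assume a third of cancer patients use CAM (the DoH's own survey figure) and invent support rates of 90% among users and 20% among non-users:

```python
import random

random.seed(0)

# Hypothetical assumptions (not from the report):
# - a third of cancer patients use CAM (the DoH's own survey figure)
# - CAM users support NHS provision with p=0.9, non-users with p=0.2
POP = 100_000
population = []
for _ in range(POP):
    user = random.random() < 1 / 3
    supports = random.random() < (0.9 if user else 0.2)
    population.append((user, supports))

def support_rate(sample):
    """Fraction of a sample that supports NHS provision of CAM."""
    return sum(s for _, s in sample) / len(sample)

# An honest estimate: a simple random sample of 80 patients
srs = random.sample(population, 80)

# The study's design: 85% CAM users, 15% non-users (68 + 12 of 80)
users = [p for p in population if p[0]]
non_users = [p for p in population if not p[0]]
study = random.sample(users, 68) + random.sample(non_users, 12)

print(f"whole population:      {support_rate(population):.0%} support")
print(f"random sample of 80:   {support_rate(srs):.0%} support")
print(f"85/15 CAM-user sample: {support_rate(study):.0%} support")
```

Under these made-up figures the population's true support rate sits in the low forties per cent, while the 85/15 sample reports something near 80% - and no amount of "in-depth" interviewing of that sample recovers the true figure unless the bias is explicitly corrected for.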

Of course, there is one way this could happen: a free market in health care provision. Then patients would be able to vote with their feet. I strongly doubt that many would gravitate to, and especially spend their own money on, significant CAM, but in a nationalised system this doesn't matter. By concentrating on people with a preference for CAM, the report has been able to make the tacit case that this is representative. Oh, and more research is needed:
The social research agenda that should run alongside trials work is substantial. There is, for instance, a need for more in depth work looking at specific patient groups in order to develop services that reflect diversity. For instance, the PI on this project (Tovey) is currently working with colleagues at the Institute for women’s health on developing such targeted work for gynaecological cancer. Similarly the differentiation amongst biomedical practitioners (within medicine and between medicine and nursing) requires further research. The motivations and objectives of those concerned need to be unpacked in order to provide a solid base for reform of practice.
Here's the analysis section of the report, point by point:
There is a need for a greater emphasis on the differing needs of patient during different illness and treatment stages in cancer policy and clinical practice;
Syntactically correct, entirely without meaning: a splendidly postmodern opening gambit.
Inter-professional relations between CAM and the biomedical community are still at times highly problematic, and more work needs to be done on promoting dialogue about the value of, and potential benefits for individual patients, of CAM approaches to cancer care;
Translation: medical practitioners disagree with me, and must be brought to heel. "Dialogue" has been redefined as the process whereby other people are brought to share, or at least accede to, your own views.
The production of a biomedical-type evidence base for CAM, as so often espoused, may not actually have a significant influence on cancer patients’ decisions to use, and views of, CAM;
Opinions do not require evidence to back them up.
The development of cancer policy must acknowledge, and reflect, the types of complex issues that cancer patients face in decision making and treatment processes including the multifaceted nature of the notions of ‘evidence’ and ‘effectiveness’;
CAM must be included in therapies, whether or not there is any evidence that it is in any way effective.
At present, access to information about CAM is largely ad hoc and informal; patients desire for information and dialogue about CAM from a range of stakeholders and cancer policy needs to reflect this.
CAM should be fully institutionalised within the NHS. Oh, and:
The findings of the research also highlight the need for further research in the area...
In what way did the non-random nature of the sample "[inform the] interpretation of the results"? I can see no adjustment whatsoever to the sampling bias.

In a health service that cannot afford to carry out vital procedures in a timely way, that has to ration or deny drugs that would stave off cancers and dementia, increasing amounts of money are being spent on therapies, and on research into therapies, that it is acknowledged do not cure or alleviate illness.

The justification for this includes research carried out into the preferences of people who already prefer this to be the case. And guess what - the research discovers that they prefer this to be the case. Therefore it remains the case, and the provision of alternative therapies is set to continue to expand. When challenged, policy makers will cite research.

This is the nature of the research they cite.

3 comments:

Anonymous said...

The universities are full of subjects that have proved to be intellectual dead-ends and should be scrapped.

flashgordonnz said...

I started to laugh, but then I remembered under-funded A&E facilities. So I wept.

Anonymous said...

Don't forget that there is good and bad practice in most disciplines. Unfortunately public health researchers have grasped some of the ideas of qualitative research but often have no disciplinary background to inform the methods they adopt. This (and the poor critical faculties possessed by some researchers) means that the studies can become little more than rubbish journalism.

While that is true, high-quality work in the qualitative social sciences does exist, and to issue a blanket harrumph about the entire lot is a pity. There really are research questions you cannot answer using statistics - to imply otherwise would be idiotic.