
For many years, people living in Lyme, Connecticut, were plagued by mysterious flu-like symptoms, rashes and joint pains. Patients were convinced that their symptoms had something to do with the deer that roamed the nearby woods and the ticks they frequently found clinging to their clothes. But because the symptoms were so ill-defined and there was no treatment, physicians dismissed them as psychosomatic.
Then in 1975, alarmed by the exceptionally high rates of juvenile arthritis in Lyme, the Connecticut health department decided to investigate. It took seven years but eventually, in 1982, they identified a spiral-shaped bacterium in the midgut of deer ticks that was also present in the blood of sick people. All it took to spark the disorder was for an infected tick to latch on to an unsuspecting victim and inject the bacterium under their skin.
Today, there are two tests for Lyme disease and most cases respond to treatment with a course of antibiotics. But, as with other hard-to-define syndromes, that is not the end of the story. This is because a lab test alone is not sufficient to rule Lyme disease in or out. Tests that lack specificity can produce false positives, while tests that lack sensitivity can produce false negatives. For the test to be considered diagnostic, the patient must also have a high pre-test probability of having Lyme disease. But that is a judgment that can only be made by a doctor who has taken a detailed case history, established that the patient has symptoms typical of the disease, and confirmed that they have been in a region where Lyme disease is endemic. In other words, Lyme disease, like any other medical condition, has a large subjective element, and diagnosis is as much an art as a science.
Physicians have been cognisant of this paradox since the time of Hippocrates – indeed, the history of medicine can be seen as an effort to put diagnosis on a sounder scientific footing by elucidating the role that microbes, environmental pollutants, genes and psycho-social stressors play in disease. But as medicine has become more sophisticated and we have developed more sensitive tests and treatments, so more and more people have acquired diagnostic labels. As Suzanne O’Sullivan, who has been a consultant in neurology since 2004, argues in The Age of Diagnosis, this can be a good thing if the diagnosis leads to greater understanding and improved treatments, but not if the diagnosis is less definitive than we think and risks medicalising people without long-term benefits to their health.
For example, as many as half a million people in Australia are reported to have Lyme disease, even though the ticks that carry the Lyme bacterium are not found in Australia. Worldwide, the condition has an estimated 85% overdiagnosis rate. Or consider autism. Fifty years ago, autism was said to affect four in 10,000 people. Today, the worldwide prevalence is one in 100, and in the UK diagnoses of autism increased 787% between 1998 and 2018. Similarly, diagnoses of attention deficit hyperactivity disorder (ADHD), a term first coined in 1987 to describe fidgety children who had trouble concentrating, doubled in boys and tripled in girls between 2000 and 2018, and today the label is increasingly being applied to adults as well.
But what if these reported increases do not reflect an actual increase in the prevalence of these conditions but are examples of “diagnosis creep”? As O’Sullivan writes: “It could be that borderline medical problems are becoming ironclad diagnoses and that normal differences are being pathologised… In other words: we are not getting sicker – we are attributing more to sickness.”
This is not only a problem for those whose behaviours were once considered within the normal range and are now being pathologised. It is also a problem for those who, thanks to more accurate gene testing technologies, are being told they may be at risk of cancers that they may never go on to develop. Is it better to be forewarned that you carry a BRCA gene mutation that puts you at a 40-85% risk of breast cancer, and be faced with the dilemma of whether to have a preventive mastectomy, or is it better not to burden yourself with a traumatic decision that may prove unnecessary? And what of projects such as the UK’s newborn screening programme, which aim to test apparently healthy babies in order to predict serious future health conditions so they can be treated at the earliest possible stage, or even before symptoms start?
These considerations become increasingly important as medicine gets better at diagnosing previously diffuse conditions, without necessarily being able to offer a cure or treatment that results in real-life improvements. Nobody wants to tell a child they have a high risk of dementia in the very distant future, or burden them with a diagnosis of Huntington’s disease – an untreatable genetic condition that usually becomes apparent in your 40s. That is why screening for childhood-onset genetic diseases is being limited to conditions with predictable genetics, where early treatment is likely to make a difference to health and longevity. “The aim is to empower people to access early intervention to protect their child’s future health, not to weigh a child down with an inescapable destiny,” O’Sullivan writes.
In the case of conditions such as autism and ADHD, however, O’Sullivan fears it may already be too late. The Diagnostic and Statistical Manual of Mental Disorders (DSM) used to define ADHD as a childhood condition that disappeared in adolescence, she points out. But with the introduction of the term attention deficit disorder (ADD) in the 1980 edition of the DSM, both ADD and ADHD were extended to older and older age groups. The result today is that ADHD (the more commonly used term) is no longer something you are expected to grow out of. Instead, the label has become central to many people’s identities. This would not be an issue if the diagnosis led to palpable life improvements but O’Sullivan reports that many of the people with ADHD she spoke to had dropped out of work and education and become estranged from their friends. “I saw a worrying gap between the perceived benefit of being diagnosed and any actual improvements in quality of life,” she writes.
O’Sullivan concludes The Age of Diagnosis by reminding us that a physician’s first duty is to “do no harm”. Her principal concern is that we have become so enamoured of the latest technologies and cutting-edge diagnoses that we haven’t taken time to properly weigh their potential harms. That is why before reaching for the latest technological solution, she argues, doctors need to listen closely to their patients. By the same token, patients should recognise that good health is not a constant, that it is unrealistic to expect doctors to provide a solution for all that ails them, and that medicine is not a sticking plaster for every behavioural and social problem.
• The Age of Diagnosis: Sickness, Health and Why Medicine Has Gone Too Far by Suzanne O’Sullivan is published by Hodder & Stoughton (£22).
