We’d like to think that medicine is entirely evidence-based, but it’s not. Some of what we do has a great amount of evidence behind it, but sometimes the evidence is a little more shaky. Sometimes there’s practically no evidence. As a paramedic put it to me in an ACLS class one day, aside from this short list of drugs, as far as the evidence goes, we could put mayonnaise in that IV and there’d be a similar amount of research to support it.
I mean, we try our best to operate on what we think is best. We read consensus statements from working groups, we follow clinical practice guidelines published by committees of experts, and when we aren’t sure, we go by conjecture based on previous clinical experience and whatever tangentially associated evidence we happen to have packed away in our brains. We usually get pretty close, but the truth is that we could get a lot better. A lot of the time, we’re basically running on nothing.
In truth, we don’t really run on “nothing”: clinical experience isn’t irrelevant, and recommendations from those with more experience than us, while not exactly evidence, aren’t exactly nothing either. But there are lots of situations, especially in the sort-of new-frontier type of medicine, where the honest answer to the question of what’s best to do is that we don’t know.
So imagine my joy when I found a document on a subject I knew had gotten little research. Somebody (in this case the Canadian Thoracic Society) had compiled all of the best available information into one document and made recommendations based on it. I skipped off to the printer (sorry, trees), pulled out my highlighter and began swiping away at the passages I found most relevant. I got two swipes and three paragraphs into the actual recommendations before I found this gem:
“Unfortunately, each of these techniques suffers from the lack of well-designed prospective trials. As such, recommendations were informed by observational studies and professional consensus.”
Professional consensus and observational studies. So clinical experience (times a lot of clinicians) plus tangentially related evidence (with a small sample size and no controlled conditions) are literally the best evidence we have. Like I said, it’s not exactly nothing, but when you consider the way things tend to fall apart under close scrutiny in this field, it’s about as close to nothing as you can get while still having a half-assed idea what you’re doing.
In school they teach us to operate on this version of ‘nothing’. They teach us models, give us context, and try to help us develop the skills necessary to work outside the textbook, because very few patients are cookie-cutter. We operate by the book a little when we’re brand new, but as we gain experience we learn things that are unteachable. We learn how much wiggle room we really have, that it’s not necessarily the end of the world if we try something and it doesn’t work. We learn that the limits we were given are margins of safety, and that there’s a lot of space between the margins.
Enter the patient. I’m lucky, I say: so many of my patients are heavily sedated and won’t remember what I did to them, and I have a sort of list of things I can try in order to achieve the result I want. It’s a common refrain in health care that patients don’t read textbooks, and it’s true. It’s precisely because no two patients are exactly alike that no two treatments are exactly alike. It’s the nature of what I do that I intervene and look for a particular patient response; when I don’t get the response I want, I change my intervention. In this way each patient becomes their own isolated experimental model: an informal n-of-1 trial.
I think if most people knew how much of my job (especially with regards to ventilating people) is “well, let’s try it and see what happens,” they’d be a little concerned. The truth is that that’s the essence of a lot of medicine. The beauty of ventilating someone with a piece of equipment that retails for more than a small condominium is that I get the benefit of immediate information about how my experiment is working. I don’t have to wait for days for antibiotics to work or steroids to kick in. I don’t even have to wait the minutes it can take for sedation to kick in. I will usually know in under 5 minutes if what I want to do is going to work or not, and because things respond so fast, unless I do something exceedingly stupid it’s actually very difficult for me to harm somebody with an experiment of this kind.
Sometimes I get another kind of immediate data: sometimes my patients are awake and talking. The home ventilator stuff I linked up there is so interesting precisely because of that. 99% of the time when I ventilate a patient, they’re out cold and I’m left to do the guesswork based on some animations and a few fluctuating numbers on an LCD screen. When the patient’s awake, they can tell me what they want and how they feel, and if they’re articulate about it and it’s a problem I can solve, in a way this gives me far more fine-grained control over what I end up doing.
My home-vent patient asks me questions, and the answer I have is an honest one: we don’t know, there’s not a lot of research to support this, we don’t have a lot of good models for what we’re doing, it depends on how you respond. That sounds terrifying to somebody who wants the paternalistic model of medicine to hand down a pronouncement from on high about what their therapy will entail. Sometimes we do that, but we try not to. Care plans shouldn’t be about what I think is best for you. I don’t live in your body 24 hours a day, and once you walk out those doors the life you live is your own. If I’m going to come up with something that you’re going to be able to live with day in and day out, it’s far better if we can come up with it together.
It’s easy 99% of the time with my heavily sedated patients. The tube comes out, they come to (sometimes not in that order), and what I’ve done is something that was profoundly uncomfortable and yet saved their life. They don’t have to live with the therapy on a day-in, day-out basis: it was a short-term thing, and once over, it can be forgotten.
With someone who’s vented at home, it’s an entirely different story. My therapy is woven into their life, and without it that life would be considerably shortened. They can’t ignore what I’m doing if it’s uncomfortable, and they can’t forget about it because it’s ever-present. I need his feedback to do my job properly: the equipment I use in the home has a tenth the sophistication of the equipment I use in the ICU. I lose my raw data, get subjective information instead, and have to glean a course of action from that.
The benefit of this is that his subjective response is often just as quick. He knows his body, and I can trust that. He gives me far better data than I can get off an LCD screen, and it allows me to individualize his vent settings in a way I would never dream of doing with an acutely ill patient. Admittedly it helps that most chronic ventilator patients have healthy lungs, and I’m using settings far gentler than anything I’d use on someone really sick, but an experiment is an experiment, and it can still go awry.
It depends heavily on the patient, too. Some people become very uncomfortable if they think you don’t know. Some people are anxious, and when you say “we’ll have to try it and see how it goes” they hear you say that you’re not confident in what you’re doing. (Those are the times you have to be part salesman.) But most people actually respond really well to an authentic voice, when you tell them that we just don’t know and that in a lot of ways we have no way of knowing. I tried hard to be honest without being wishy-washy, and I think my patients appreciated my lack of fatalism and my willingness to be flexible.
Even though an n-of-1 is a terrible way to conduct scientific research, it’s a great way to conduct patient care. We are all individuals, and what works for one of us may well not work for the next; cookie-cutter approaches don’t always work. At some point a really good clinician has to be willing to go beyond the textbook, to look at the data they have, to try new things and see how they work out. The ability to think critically in this way is part of what separates those who really know what they’re doing from those who practice the paint-by-numbers, recipe-book method of health care. And some of the most valuable things to come out of such an experiment are the experience of having done it in the first place and the learning of those things that are unteachable. We shouldn’t fear experimentation. It’s how we become truly great at what we do.