{"id":2021,"date":"2018-11-15T18:23:44","date_gmt":"2018-11-15T23:23:44","guid":{"rendered":"https:\/\/linacolucci.com\/?p=2021"},"modified":"2018-11-15T18:59:31","modified_gmt":"2018-11-15T23:59:31","slug":"the-key-feedback-loop","status":"publish","type":"post","link":"https:\/\/linacolucci.com\/2018\/11\/the-key-feedback-loop\/","title":{"rendered":"The Key Feedback Loop That’s Missing in Medicine"},"content":{"rendered":"
#EngineerInTheWards is a series based on my experiences and reflections during hospital rotations. I completed my PhD at the Harvard-MIT HST Program, where I took approximately the first year of medical school coursework at Harvard and 3 months of clinical rotations, in addition to engineering coursework at MIT.

I started my hospital rotations thinking that the healthcare system was dysfunctional. I finished my hospital rotations thinking that the healthcare system was great. Here’s what happened in between and why it’s one of the most fundamental problems in healthcare.

• • •

As a patient, I was acutely aware that there are no built-in feedback loops in medicine between the patient and doctor (other than an occasional lawsuit). When a patient never shows up to a doctor again, it could be because (A) the doctor fixed the patient’s problem or (B) the patient never wants to see that doctor again. Neither the doctor, the hospital, nor the next patient will ever know which reason it is.

For example:

The oral and maxillofacial surgeon who inadvertently dislocated a disc in my jaw doesn’t know that it still hurts to eat 4 years later.

The podiatrist who made me pay $300 for a custom shoe insole doesn’t know that my foot hurts just as much as when I first went to him 1 year ago.

The specialist who said there was no way to relieve my mom’s pain doesn’t know that she actually just needed to stop eating dairy.

The primary care doctor who said our family friend just needed to learn to control her anxiety attacks doesn’t know that she actually had aortic stenosis.

These clinicians will continue practicing the way they’ve always been practicing: continuing to believe that the insole will get their patients pain-free, continuing to not realize that food allergies can cause generalized inflammation in someone’s body, and continuing to misdiagnose female patients with panic attacks.

Personally, I don’t want to go back to the doctors who did not solve my problems during the first 1-3 visits because (1) I don’t believe they actually know how to solve my issues, and (2) it takes time, effort, and money to arrange each appointment. It is a lot of work to be a patient.

• • •

During my hospital rotations, my frame of reference shifted. I was now evaluating the healthcare system based on my experiences as a member of the care team. My evaluation window ran from the moment a patient appeared in front of me to the moment I pressed “print” on the discharge paperwork.

As I learned how to function on a care team, I got positive reinforcement from my superiors (residents and attending physicians). I was applauded for ordering the right blood tests, suggesting the right medications, and offering up the right treatment plan.
Patients came in with problems, we “fixed” them (at least according to medical guidelines), and then moved on to the next person.

After 3 months in this environment, I left thinking that the healthcare system was great[1].

I didn’t have time to notice the hours our patients spent waiting before we saw them, or the confusion they experienced when trying to make sense of our recovery instructions at home, or the inevitable side effects they would experience in the months to come. Patient feedback was never part of my feedback loop while on the care team[2].

• • •

A functional healthcare system should want to receive feedback about whether doctors are effective at solving their patients’ problems. Yet unfortunately the strongest feedback channels for physicians are: (1) meeting their hospital/employer’s patient quotas and (2) adhering to medical guidelines.

The reality is that doctors do not actually know if their recommendations are solving patients’ problems. Doctors assume that when a patient never shows up again it is because the issue was “fixed.” I would argue it often means the patient doesn’t want to go through the effort of setting up another appointment that they don’t think will yield any benefit.

We need to find a way to get real patient feedback to doctors so that doctors learn what works and what doesn’t work for their patients[3]. And we need to share these aggregated patient outcomes from each doctor with the public so that patients are able to find the right doctor for their specific issues.

ZocDoc and other doctor-finding services are simply specialty- and insurance-matching platforms. These sites display a lot of useless information just because it is publicly available. I don’t care what professional societies my doctor is a part of, whether they have a waiting room, or, frankly, what the vast majority of the patient reviews say. Here is one useless patient review as an example:

“They always dress so professionally here, the entire staff obviously makes an effort to look their best and maintain a very professional appearance”

If a doctor has a proven track record of solving my particular problem for my particular demographic, I will go to them even if they wear jean cutoffs, have a waitlist of 1 year, and make me stand outside in the cold before an appointment.

We need hard facts by which to judge clinicians. And clinicians need hard facts to learn how to adapt if they are not performing well compared to their peers. For any surgeon or interventionalist, we need to share how many of each procedure they do each year, along with patient demographics and complication numbers for each type of procedure.

For non-interventionalists, automated text/email/phone-based surveys[4] after a visit can perhaps get at the same information (a rough sketch of such a survey follows the list of questions below):

We have recorded that you came in to discuss [x] with your doctor. Is this correct?

Has this issue resolved yet?

Did you follow through with [a, b, c] recommended by your physician?

If you’ve followed through, do you think you need to come back for a follow-up visit because the current treatment plan does not seem to be working?

If you haven’t followed through, do you need help with it?
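To make the idea a bit more concrete, here is a minimal sketch of what generating such a follow-up survey might look like in code. Everything here is an assumption for illustration: the `Visit` record, its fields, and the `build_survey` helper are hypothetical, and only the question wording comes from the list above. A real system would also need to send the questions and aggregate the answers per physician, which this sketch leaves out.

```python
# Hypothetical sketch: generate a post-visit follow-up survey from a visit record.
# The Visit fields and the build_survey helper are assumptions for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class Visit:
    patient_contact: str        # phone number or email address
    chief_complaint: str        # the issue the patient came in to discuss ("[x]")
    recommendations: List[str]  # what the physician recommended ("[a, b, c]")


def build_survey(visit: Visit) -> List[str]:
    """Fill the bracketed placeholders in the survey questions from the visit record."""
    recs = ", ".join(visit.recommendations)
    return [
        f"We have recorded that you came in to discuss {visit.chief_complaint} "
        "with your doctor. Is this correct?",
        "Has this issue resolved yet?",
        f"Did you follow through with {recs} recommended by your physician?",
        "If you've followed through, do you think you need to come back for a "
        "follow-up visit because the current treatment plan does not seem to be working?",
        "If you haven't followed through, do you need help with it?",
    ]


if __name__ == "__main__":
    visit = Visit(
        patient_contact="+1-555-0100",
        chief_complaint="recurring foot pain",
        recommendations=["a custom shoe insole", "daily stretching"],
    )
    # In a real system these questions would go out by text/email/phone and the
    # answers would be aggregated per physician; here we simply print them.
    for question in build_survey(visit):
        print(question)
```

The point of the sketch is that the survey can be assembled automatically from information the clinic already records at the visit, so the feedback loop does not depend on the patient booking another appointment.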
We know that not all doctors are equal. Mortality rates for different surgeons can differ wildly. Survival age for cystic fibrosis patients can vary by more than a decade depending on their care team. And the examples go on and on. By not sharing physician performance stats[5], we prevent patients from choosing the doctor that is going to give them the best chance of success.