My Geneia colleague and host extraordinaire Andrea Durkin recently invited me to discuss data science, healthcare, COVID-19, and artificial intelligence (AI) model interpretability on her podcast Geneia Conversations. Read some highlights below or, even better, listen to the whole discussion!
What is the Geneia Data Intelligence (GDI) Lab and what’s your role?
All of us in the GDI Lab are data scientists, and we focus on reducing healthcare costs and improving patient care. Our health is really complex: scientists and doctors have been trying to understand it better for centuries. It used to be that our big problem was having so little information about a patient’s health. Treating a patient was like trying to see the picture on a puzzle when you have only 1% of the pieces. Now technology is getting better, and we can see maybe 5% of the pieces. But even just 5% is such an overwhelming amount of information that it takes a huge amount of work just to make any sense of it. That’s what we data scientists do: we use statistics and models to find the important information in this confusing, puzzling mess, and we get that information straight to the doctors, to the hospitals, to the insurance plans, so that they can make better decisions for patients.
What kind of models have you been working on?
I’ve worked on several topics while here at Geneia. Probably the one that has most affected all our lives is COVID-19. When the pandemic started, it was important for us to quickly get a handle on who was at the highest risk from the virus, so we rapidly put together a model to do that. And then, as more and better information became available, we iterated and improved the model so that its predictions became more refined.
What is model interpretability or explainability?
The models that we work with can be really complicated. And when one of these models makes a prediction, we want to know why. A model might answer the question: Is this person at high risk for hospitalization with COVID-19? Or, is this person at high risk for a readmission within 30 days? If so, why is that? Is it because of their age? Is it because they have heart disease or diabetes? Or is it something else? Interpretability lets us answer those questions.
Can you share an example?
Sure. A simple example is from my own work, where I created a model to predict readmissions. So, the situation is: a patient has been discharged from the hospital. Will that person be readmitted to the hospital within the next 30 days? If so, that’s bad, and we want to try to prevent it. So I created a model to predict that. And it turns out that the most important characteristic in predicting whether someone will be readmitted… is whether they have a history of readmissions. If they do have a history of readmissions, then it’s more likely that they will be readmitted again. And I know that because I was able to interpret the model. That interpretation gives us more confidence that the model’s predictions are based on something real; it’s not a mistake.
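To make that concrete, here is a minimal sketch of the kind of check involved. The data and feature names (like prior_readmissions) are invented for illustration, and the actual Geneia model is more sophisticated than this.

```python
# A hypothetical sketch: train a simple readmission classifier on synthetic
# data, then check which feature the model relies on most. The features and
# data are invented for illustration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "prior_readmissions": rng.poisson(0.5, n),
    "has_diabetes": rng.integers(0, 2, n),
    "has_heart_disease": rng.integers(0, 2, n),
})
# Synthetic outcome: a history of readmissions dominates the risk of a new one.
logit = -2.0 + 1.2 * X["prior_readmissions"] + 0.4 * X["has_heart_disease"]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt the model?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>20}: {score:.3f}")
```

With this synthetic setup, the printout should rank prior_readmissions at the top, which is exactly the kind of sanity check that builds confidence that the model’s predictions are based on something real.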
How does the Geneia Data Intelligence (GDI) Lab address interpretability?
There has been an explosion of research in recent years about how to interpret models, and we’ve been working hard to educate ourselves and keep up with all the new methods that are being created. One important new method is SHAP (SHapley Additive exPlanations). SHAP is useful because it gives us really consistent results. For example, a lot of methods give inconsistent explanations for individuals versus groups. But SHAP can explain the results for one person, and it can also explain results for a group of people. If we have 1000 people, we can simply add up all the SHAP values for the 1000 people to get the explanation for the entire group. So that consistency is really helpful. A lot of other methods don’t have that consistency, and so this is a really big benefit of using SHAP.
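As a rough sketch of what that looks like in practice, assuming a gradient-boosted tree model and the open-source shap package (the data here is synthetic, not Geneia’s):

```python
# Sketch: SHAP values explain one person's prediction, and aggregating the
# same values over many people explains the group. Synthetic data only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "prior_readmissions": rng.poisson(0.5, n),
    "has_diabetes": rng.integers(0, 2, n),
})
y = (0.8 * X["prior_readmissions"] + 0.02 * X["age"] + rng.normal(0, 1, n)) > 2.5

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_people, n_features)

# Explanation for one person: each feature's contribution to that prediction.
print(dict(zip(X.columns, shap_values[0].round(3))))

# Explanation for all 1000 people: aggregate the same per-person values,
# here as the mean absolute contribution of each feature.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(3))))
```

The group-level numbers are just an aggregation of the individual ones, which is the consistency I’m describing: nothing new has to be computed or reconciled to move from one person to a population.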
Another useful thing about SHAP is that it applies to many different artificial intelligence (AI) techniques. We data scientists might choose to solve a problem with a regression, a decision tree ensemble, or a neural network, but we can use SHAP in all of those situations. That way, if you’re using our model, you don’t have to learn a different explanation method for each kind of model we might use. You don’t have to know what a decision tree is or what a neural network is. You have one consistent way to understand our models, no matter which kind of model we use.
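Here is a hedged illustration of that model-agnostic idea, again on made-up data and assuming a recent version of the shap package: shap.Explainer picks a suitable algorithm for each model type, so the person reading the explanation sees the same per-feature output whether the underlying model is a logistic regression or a tree ensemble.

```python
# Sketch: one SHAP workflow applied to two different kinds of models.
# Synthetic data; illustrative only.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 500
X = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "prior_readmissions": rng.poisson(0.5, n),
})
y = (X["prior_readmissions"] + rng.normal(0, 1, n)) > 1.0

models = {
    "logistic_regression": LogisticRegression(max_iter=1000).fit(X, y),
    "gradient_boosting": GradientBoostingClassifier(random_state=0).fit(X, y),
}

for name, model in models.items():
    # shap.Explainer dispatches to an algorithm suited to each model type,
    # but the explanation comes back in the same per-feature format.
    explainer = shap.Explainer(model, X)
    explanation = explainer(X)
    mean_abs = np.abs(explanation.values).mean(axis=0)
    print(name, dict(zip(X.columns, mean_abs.round(3))))
```

The two sets of numbers live on different scales (log-odds for the linear model, raw margin scores for the trees), but the format of the explanation, one contribution per feature per person, is the same either way.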
What’s not wrong with healthcare?
For all our challenges, we here in the United States are at the forefront of healthcare innovation — not just in creating new treatments and technologies, but also in figuring out how to control costs. While it’s true that we as a country have probably done the worst job at controlling healthcare costs, it’s a problem that has affected all countries with advanced healthcare systems. Thanks partly to a lot of the good ideas that went into the Affordable Care Act, we’ve been at the forefront of experimenting with improved ways to deliver healthcare at lower cost.
Our real challenge is in translating that new knowledge into practice. We’re incredible at developing fancy new technologies and treatments, but we’re terrible at making sure that everyone can get them. We’ve under-invested in public health and healthcare delivery for 40 years, and it shows — most vividly in our failure to control the pandemic.
Remember, if you want to know more, the entire discussion is here!