
Technology is streamlining business operations faster and faster—but how might it change health outcomes for patients? Dr. Sreeram Mullankandy, director of product management and clinical quality at Elumina, says artificial intelligence (AI) and other technological innovations can tackle social determinants of health (SDoH) in homecare patients. Elumina, an Ohio-based agency, provides home health care, personal care and home medical equipment, and has also created an electronic health record system; Mullankandy has worked on remote monitoring technology and developed AI systems for home-based care.
HOMECARE: What have you learned about social determinants of health, especially as it relates to providing care in the home?
MULLANKANDY: This was kind of a late revelation for me: 80% of health outcomes are actually driven by social determinants of health. Broadly, you can classify them into transportation, housing and food security. Clinical care accounts for 20%—that is, access to care and quality of care. Health behaviors account for 30%—that is, tobacco use, diet and exercise, alcohol, drug use, sexual activity. All of those account for 30%. Social and economic factors—education, employment, income, family and social support, community safety—account for another 40%. Physical environments (air and water quality, housing) account for 10%. So all together, we are focusing maybe 80% of our effort on what drives maybe 20% of the outcome.
You cannot change (these factors) very quickly; it’s a slow process. For example, to address the housing issue, you have to build partnerships and build your network with a bunch of housing organizations, state organizations and national organizations, and coordinate among them to make sure that patients who cannot access safe housing can actually use these resources. These are not sexy or very interesting things to do, but in terms of outcomes, they matter quite a lot.
HOMECARE: Are there things about in-home care that give clinicians and caregivers special insight?
MULLANKANDY: Oh, yes! Let me give you an example: We have a large load of patients every day, dozens and dozens if not hundreds being visited every day. It’s a huge burden for our clinical team to figure out which patients are most at risk so that we can focus on them more acutely, which ones are improving and can go forward with normal care, and who needs a heightened level of care. It takes a lot of time and is nearly impossible to do manually for every patient. So we implemented an AI algorithm—actually, two or three combined together, with some generative AI as well. During the night, it will automatically go through all the patient profiles that are scheduled for the next day. It will sift through 100 patients (or however many we have) and figure out which are high-risk patients. That might be in their clinical notes, or the social determinants of health might be documented from the last visit. And obviously we’ll have their vitals from the last visit and any other findings the clinicians have noted.
All of this will be taken into account, along with patients' demographic information and all that history, and it will categorize patients into three categories: high risk, medium risk and low risk, so that the clinicians can actually focus on real high-risk patients, or at least before going to visit a patient, they will know that, okay, this patient is a high-risk patient, I need to be very careful.
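The nightly triage pass Mullankandy describes could be sketched roughly like this. This is a hypothetical illustration, not Elumina's actual system: the field names, scoring weights, note keywords and thresholds are all invented for the example, and the keyword scan is a crude stand-in for the generative-AI review of clinical notes.

```python
# Hypothetical sketch of a nightly patient-triage pass: score each patient
# scheduled for tomorrow and bucket them into high/medium/low risk.
# All field names, weights and thresholds are invented for illustration.

def risk_score(patient):
    """Combine last-visit vitals, documented SDoH flags and clinical-note
    flags into a single numeric score (higher = more risk)."""
    score = 0.0
    vitals = patient.get("last_vitals", {})
    if vitals.get("systolic_bp", 120) >= 160:
        score += 2.0
    if vitals.get("spo2", 98) < 92:
        score += 3.0
    # Social determinants documented at the last visit
    for flag in ("no_transportation", "unstable_housing", "food_insecure"):
        if patient.get("sdoh", {}).get(flag):
            score += 1.5
    # Crude stand-in for the generative-AI pass over clinical notes
    worrying = ("fall", "shortness of breath", "confusion")
    notes = patient.get("last_note", "").lower()
    score += sum(1.0 for kw in worrying if kw in notes)
    return score

def triage(patients, high=4.0, medium=2.0):
    """Bucket tomorrow's schedule into the three risk tiers."""
    tiers = {"high": [], "medium": [], "low": []}
    for p in patients:
        s = risk_score(p)
        tier = "high" if s >= high else "medium" if s >= medium else "low"
        tiers[tier].append(p["id"])
    return tiers

schedule = [
    {"id": "A", "last_vitals": {"spo2": 90}, "sdoh": {"food_insecure": True},
     "last_note": "Patient reported a fall last week."},
    {"id": "B", "last_vitals": {"systolic_bp": 165}, "sdoh": {}, "last_note": ""},
    {"id": "C", "last_vitals": {}, "sdoh": {}, "last_note": "Doing well."},
]
print(triage(schedule))  # {'high': ['A'], 'medium': ['B'], 'low': ['C']}
```

In a real deployment the scoring would come from trained models rather than hand-set weights, but the shape of the workflow is the same: run over the next day's schedule overnight, then surface the tiers to clinicians before their visits.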
HOMECARE: Have you as an organization changed the documentation that staff are making to help provide this information?
MULLANKANDY: Yes, we had to add some custom questions regarding social determinants—some transportation-related questions, housing-related questions and food security-related questions. Some of them are already part of OASIS, so we kept them. But we had to add a few additional questions on top of that so we’ll get a clear overview of which patients are at risk.
HOMECARE: What do you think are the obstacles to adoption?
MULLANKANDY: AI is still relatively new; there are two or three different schools of thought about it. Many physicians and clinicians are okay with it, but others are still skeptical. And some of their concerns are valid. … Some clinicians still believe AI is going to replace them. Some believe it could be inaccurate and the patients would suffer (which is also true, but we have mitigation measures). So it will be slow progress, but one thing I can tell you without a doubt is that it’s definitely here to stay. It’s not going away. You either adapt to it and get really good at it, or, as somebody who’s not using it, you will become obsolete.