Pharmacy Practice in Focus: Health Systems
At ASHP Pharmacy Futures 2024, Casey Olsen, PharmD, explains that AI can affect patient trust in providers, with implications for the importance of learning to balance use of technology with human elements in clinical settings.
Artificial intelligence (AI) is increasingly present at the intersection between provider and patient, with implications for patient trust in their provider, explained Casey Olsen, PharmD, an informatics pharmacy manager – inpatient at Advocate Health, during a presentation at the American Society of Health-System Pharmacists (ASHP) Pharmacy Futures 2024 in Portland, Oregon. Olsen noted that studies have shown that the simple presence of a computer in the room can make patients feel that their provider has less compassion and less professionalism.
“But technology promises a way to fix that by bringing in technology, such as a microphone, that records an entire interaction [invisibly],” Olsen said during the ASHP presentation. “I find it really fascinating as an opportunity to increase that efficiency, to help the provider feel that greater sense of vocational benefit in the ability to not have to worry about that summarization of the discussions that they had with a patient.”
However, Olsen noted that when he raises this topic with university students, their reaction is different.
“The very moment I say patient interactions can just be recorded, the entire room, [I can see] their faces—it’s like the blood is sucked out of them. It's almost like a dystopian future to them, where they feel that their data is gone and it's not in their hands anymore,” Olsen said. “There are people that bring up concerns that they don't want to bring up things because they're being recorded. At the same point in time, when thinking through that, there's this question of where the efficiency and the transparency lies.”
Olsen explained that currently, not every health system has this recording technology in place in examination rooms. Additionally, not every health system that uses that technology makes it transparent that those interactions are being recorded.
“If we make it transparent, does that give [patients] the opportunity to voice those concerns, or does that just make them want to be quiet and hide those things that they need to discuss in those kind of relationship discussions,” Olsen said. “At the end of the day, transparency is important, but it also is important for us to consider even ahead of transparency, whether or not the technology is meaningful to intervene and use.”
Furthermore, patients have shown diverse preferences regarding transparency around the use of AI tools across all aspects of their care, including transparency on the use of AI-based decision-making tools when they meet with their provider.
“When talking about the use of technology to make decisions, that idea of listening to a computer or listening to a provider, there's a really interesting survey that just came out that said basically the room is split. Approximately 50% of people prefer AI-guided care, and 50% prefer human-guided care,” Olsen said. “If you told me that an AI was going to decide my care, I would be a little worried right away, but the fact that we have that very diverse expectation set and excitement from the patient population demonstrates that we need to be really cognizant of how we can leverage technology to be the most positive for that patient's experience within the health care system.”
Deciding When to Use AI
NASA has high-risk workflows in which technology pilots spacecraft carrying humans to the moon, Olsen explained. When putting human life in the hands of technology, it is also important to consider the dynamic from a user-trust perspective. In NASA's case, the agency needs to ensure that astronauts feel comfortable entrusting their lives to that technology.
“NASA surveyed the people that were going to be using the technology along with the scientists that were coming up with it to determine various aspects of user trust. We have a scale then that exists as a result of applying that rubric that says that the highest level of automation is when you're automating it completely, [which is when] the computer executes the decision without any human interference. At the lowest level, it's all in the human's hands. Then there's plenty in the middle,” Olsen said.
In the pharmacy space, Olsen explained that this concept can be applied very similarly, such as in the case of autoverification of medication orders. With the use of autoverification, Olsen noted that there are 2 extremes.
“We have autoverification of medication orders without any pharmacist intervention, or you have a human verifying that medication order,” Olsen said. “Regardless of that technology, I will say, from a clinician's perspective, I get questions to this day about the things that we know are appropriate to autoverify, as to why that order was autoverified. The systems we have in health care today don't provide that clear transparency in every situation.”
For this reason, thinking about how to navigate issues of trust and transparency will be even more important when moving toward embracing a more complex technology, according to Olsen.
“I liken this whole thing back in terms of autoverification to a clinical scenario,” Olsen said. “Using acetaminophen for mild pain as an example, I believe, if we all sat together in a room, we could probably define criteria where we feel pretty confident that acetaminophen is safe for autoverification.”
Olsen explained further that if all orders required human verification, with the level of trust in the system falling at the no-automation end of the scale, the resulting volume of orders would likely raise other concerns. Specifically, there would be a higher risk of pharmacists missing valuable opportunities to intervene, because of the white noise created by the sheer number of orders in the process, according to Olsen.
“Today, we have an opportunity to think about how to value the vocations [of pharmacists], how to give transparency [to patients], and how we can empower the clinician to have that knowledge [about the use of this technology] for the interactions they have with patients or with the health care team,” Olsen said. “For transparency, [we] could use more studies providing data for pharmacy to help us assess how to best be transparent within health systems. There are things you can learn from other disciplines, and it likely requires a collaborative approach [to do so].”
Reference
Olsen C. Ethical dimensions of AI in pharmacy practice, part II: AI and the patient experience. Joseph A. Oddis Ethics Colloquium. Presented at: American Society of Health-System Pharmacists Pharmacy Futures 2024; June 8-12, 2024; Portland, Oregon.