Although regulation takes time, the rapid development of AI technology in health care presents real safety concerns for patients today.
In the digital technology space, rapid product development is key, explained Stephen V. Speicher, MD, MS, senior medical director and head of Healthcare Quality and Safety at Flatiron Health, during a session at the Association of Cancer Care Centers’ (ACCC) Annual Meeting & Cancer Center Business Summit (AMCCBS) in Washington, DC. Amid this rapid development in the tech industry, consumer safety may not be, and historically has not been, a significant consideration, Speicher explained.1
“You don't have to be a tech guru to understand that the early tech motto at Facebook was ‘to move fast and to break things,’” Speicher said during the session. “While this has led to a tremendous amount of digital technology in the space, as well as a multitrillion dollar industry and tremendous shareholder value, it has also created a culture where, historically at least, safety and quality have not been front of mind, specifically on the consumer side.”1
In the classic software development model, a team of engineers, designers, and product managers works to design, develop, and ultimately roll out software products to consumers as fast as possible, according to Speicher. This process can take weeks or months, depending on the company, a far cry from the years or decades product development can take in much more risk-averse industries, such as aerospace or drug development.1
“What that really led to is a culture where safety might not be front of mind, and it might be thought of as an ultimately secondary outcome for product development,” Speicher said. “The biggest example of this can be seen in social media. In social media platforms, recent numbers have shown that Meta, the company that owns Facebook, Instagram, and WhatsApp, has around 4 billion daily active users at this point. So, with more than half of the world's population using one of these social media platforms every single day, you would hope that safety was really thought about as these things were being built out.”1
However, a recent systematic review showed a 70% increase in self-reported symptoms of depression among groups that use social media frequently, Speicher explained. Yet, while consumer-facing digital technology does not have an ideal track record for safety, there is positive news when it comes to the health care tech industry.1
“Hopefully this is not surprising, but we've been much more vigilant in our pursuit of safe and high quality technology in health care. But it may come at a cost because I do think we're the sole industry that is still keeping fax machine companies in business,” Speicher said. “But I can personally attest to the fact that as we've been building out things like the [electronic health record (EHR)] and other software devices, we've done so while keeping safety front of mind. So, this has never been more important than right now at this moment in technological history.”1
In product development in the tech industry, there is a concept known as the “hype cycle,” which Speicher explained is an actual technical term backed by a real methodology.1 Developed by the research firm Gartner, the hype cycle follows the journey of new technologies from their initial introduction to the peak of inflated expectations, through the trough of disillusionment, and, ultimately, to the plateau of productivity.1,2
“In 2023 and 2024, Gartner puts artificial intelligence [AI] at the peak of inflated expectations, which essentially means that it's right now at the peak of the hype cycle,” Speicher said. “I think we can all see that in how AI is talked about in day to day conversations. I think, right now, if you were to ask any random person about AI, they're either going to say that it is the savior that's going to come down and do everything possible to solve every single problem out there, or, on the other side, the AI robots are going to take over the world and destroy us—and there really is no in between when we're talking about AI.”1
According to Speicher, this point in the hype cycle is an optimal time to be thinking about quality and safety and to prioritize those areas over rapid innovation. It is important for the health care industry to understand now what the opportunities and the risks are, he explained.1
“Health care, for the first time, is not behind in the use of [current technology],” Speicher said. “With 10% of physicians using ChatGPT [according to a survey conducted in the summer of 2023], we need to stay on top of safety and quality.”1
A recent story aired by the Today Show exemplified the messaging the general public is receiving around AI in a health care context, according to Speicher.1
“This [story] is actually something that my mom sent me as she is an avid watcher of the Today Show,” Speicher said. “This story is of a young boy whose mom brought him to multiple different doctors, and nobody could find the diagnosis. But she put his symptoms in ChatGPT, and they found the correct diagnosis—this is the messaging that [the public is] seeing, and this is why we need to be having these conversations today about quality and safety.”1
To effectively evaluate quality and safety, there are 3 fundamental concerns that should be considered in relation to AI tools in health care, Speicher explained. These concerns, according to Speicher, are based on his understanding of how these tools are being built, his experience in other aspects of health care information technology (IT), and what leading industry experts are saying.1
“The first area of crucial potential risk lies in the underlying foundational data infrastructure that underpins these models,” Speicher said. “This is especially problematic as we think about health care data specifically. One of the major moves in health care data is to this world of interoperability where communication and data are being transferred back and forth very seamlessly. While that era is here, I would say it's still fairly nascent. So, what that means is that data standards across the health care IT ecosystem are improving, but in their current state, they still have a significant amount of variability.”1
According to Speicher, this is especially true in terms of historical data.1
“If you have an AI model that's learning relationships of hemoglobin to disease progression for a patient, what happens if those hemoglobin values are inaccurate? The values are incorrect and the units are incorrect, and it is learning off of these historical data,” Speicher said. “Furthermore, these data standards that I'm talking about, they're proving to be really only applicable to Certified Health IT.”1
Health IT certification is overseen by the US federal government’s Office of the National Coordinator for Health Information Technology (ONC).3 The ONC’s certification program ensures that Certified Health IT meets the technological capability, functionality, and security requirements adopted by the US Department of Health and Human Services (HHS).3 Speicher explained that companies whose health IT is not certified may make claims regarding the capabilities of their technology, but without certification, they may not hold themselves to the same data quality standards that Certified Health IT companies do.1
“The second area that I want to talk about is the potential for bias [in AI tools in health care],” Speicher said. “As we know, in the past few years, we've seen a major push for understanding existing inequities in health care and making sure we are trying to work towards a world of health equity for individuals regardless of race, gender, sexual orientation, income, and many other variables.”
According to Speicher, like many technologies, AI has the ability to move things in one direction or the other.
“It could be a movement towards improved equity, or it can be a movement away from equity and towards greater inequities in health care,” Speicher said. “So how do we think about that within our AI tools? Again, this goes back to our data example—what happens if a model is based off of data that is not inclusive, and we're coming up with decisions and making conclusions that are really only relevant for a subset of the population and really shouldn't be applied to the broader population?”1
The third and final concern that Speicher noted should be considered in relation to AI tools in health care is how these tools are actually being used and how it can be ensured that they are used in the most appropriate way.1
“What I've learned working in health care IT is that these tools are really only as safe as the way they're being used by clinicians,” Speicher said. “The example I like to use is an example of a car. So of course, the car needs to be safe, the brakes need to work, the transmission needs to do whatever transmissions do—my example breaks down really quickly because I don’t know cars very well—but, ultimately, it's the driver that dictates the overall safety profile of the car. If you have a 16-year-old with a proven undeveloped prefrontal cortex vs someone that's a little bit older who is a little bit more risk averse—that's who's really dictating the safety of the product. In health care, it is no different, and AI is no different.”1
According to Speicher, it is important for providers to have full visibility into how these tools work, what they are meant to do, and what they are not meant to do. Further, Speicher noted that it is important to understand ways to expedite workflows without limiting the critical thinking of physicians. However, who is responsible for making sure all of this happens is an additional consideration.1
“What's really interesting to me is this may be one of the very few topics where there's a unanimous opinion that regulation is actually quite important,” Speicher said. “That being said, the regulatory framework in health care IT is very unique, and not everything is regulated. There's fragmentation on who regulates between the FDA, HHS, and other government agencies. Ultimately, and this is not going to surprise anybody, but these things take a very long time to actually go through.”1
In the meantime, these products are rapidly evolving while regulatory bodies review them, Speicher explained.1
“What we know about the current regulatory landscape at this time is that we have a few definitive inputs on how we think about health care IT and AI in this space,” Speicher said. “The first thing we have is the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that was released by the White House [in October 2023], and we anticipate something coming out from HHS in 2024, outlining in a little bit more detail what this means specifically for health care.”1
Then there is what is most relevant to Certified Health IT: the ONC’s Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule. This rule builds on the ONC’s prior work on clinical decision support. HTI-1 also creates a new concept known as decision support intervention (DSI), with the ONC beginning to regulate what it calls predictive DSIs for the first time.1
“[DSI] focuses on decision support that derives relationships based on training data and calls out the importance of source attributes with a really strong focus on transparency,” Speicher said. “This is the ONC’s attempt to really understand where AI is starting to come into EHR products and other software products as well.”1
State governments are also looking to get involved in this regulation process, and some have started to develop their own rules. In Georgia, for example, a state senate bill seeks to regulate not EHR companies or software companies, but providers, addressing the practice of medicine and the safe use of EHRs in practices in that state, Speicher explained.1
“So clearly, they're moving quickly, and there's regulations on the horizon,” Speicher said. “But that being said, these tools are being used actively by providers [now]. So where does that responsibility lie?”1
Speicher noted the relevance of a quote from Fei-Fei Li, PhD, professor and director of the Stanford Artificial Intelligence Laboratory at Stanford University, who has called out the urgent need for policymakers to fully understand what AI is now and the multistakeholder approach it will take to do so.1
“[Li explains that] there needs to be checks and balances [in this process],” Speicher said. “She finishes with her biggest fear being waking up and hearing about the first reported death by suicide from self-diagnosis of ChatGPT.”1
Speicher explained that this concern is an important one and is not off in the future, but is relevant today. If a ChatGPT user hears news stories of ChatGPT solving diagnostic challenges that physicians cannot, then enters their own symptoms into ChatGPT and receives a terminal diagnosis, that patient may decide to take matters into their own hands. Especially if the patient thinks ChatGPT is as good as a doctor’s opinion, if not better, because that is what the media is telling them, then suicide is a real risk.1
Speicher noted, however, that health care professionals do not need to be AI experts to use AI health care tools safely and to understand the current concerns surrounding their use.1
“But I do think it's really important as health care professionals, providers, and administrators to really understand some of the basics of these tools and understand what questions you need to start asking, and to have an appropriate level of skepticism,” Speicher said.1
The first question Speicher noted he would recommend asking is about the use case.1
“I'd want to understand where this [health tech] is incorporated into the workflow and how risky that part of the workflow is,” Speicher said. “Are we talking about treatment decisions or are we talking about diagnosis? We have to understand how risky that part of the workflow is, and how skeptical we need to be of the tool.”1
Additionally, Speicher noted it is important to understand the data used to train the AI and to ask questions about the standards the health tech developer has for data quality.1
“Where is the data coming from, and what standards are in place?” Speicher said. “I [also would recommend asking] how often the model is refreshed. We know that in health care, and specifically in oncology, things change rapidly. So how are we making sure the most up-to-date information is being put into this model and training this model to make sure that we're making the right decisions for patients?”1
Speicher noted he would also recommend asking how quality and safety concerns are addressed once they are discovered. Specifically, it is beneficial to ask who is in charge of quality and safety and how these tools are assessed before they are put in front of clinicians.1
“The last question is this: What is the implementation plan for this [health tech]? Again, going back to that safe use of the tool—how am I going to make sure that my end user, my providers, my clinicians, my nurses, whoever's using this tool, are using this appropriately?” Speicher said. “How am I going to make sure that they know how to use this, when to use it, and how to make sure that it's as safe as possible?”1