Commentary
Artificial intelligence (AI) in the health care market is projected to grow from $14.6 billion in 2023 to a staggering $102.7 billion by 2028. Historically, technology that closely resembled what we now know as AI was conceptualized as early as the 1950s.
The medical field has been using AI since the 1970s in clinical settings, data collection, and research, as well as in care modalities such as radiology and even as a tool to assist with surgical procedures. But now, non-clinical AI adoption is catching up, and we are embarking on what is sure to be a long and winding road to widespread adoption.
With the help of new and complex algorithms and self-learning models, we are currently privy to what we may look back on as the golden era of AI. But it hasn’t been all blue skies when it comes to societal buy-in due to fears of a total AI takeover, skepticism about inaccuracy, and ethical concerns, such as data privacy and racial biases.
Fortunately, these concerns are front of mind for many, and the technology itself promises to ease these doubts over time. As technology leaders, it is our responsibility to use the tools at our disposal to build trust with patients, and in this article we will share how to bring that to fruition.
Today's headlines amplify everything AI is capable of, and although plenty of educational articles are available, much of the coverage remains cynical. AI faces widespread distrust that generally stems from the idea that it will take over our jobs and, eventually, the world.
These fears, however, may be largely unfounded. Many experts are not concerned about AI and total world domination. What they are concerned about is the ethical implementation of this new and exciting technology.
An overarching concern that has arisen specifically related to the medical field is that of racial biases. Even global leaders, such as the World Health Organization (WHO), have made statements calling for the responsible deployment of AI in the field, specifically calling out misleading information that may stem from biased data.
Luckily, world leaders and governments are working to set parameters to ensure that these issues are addressed effectively and consistently as AI grows. The United States is still in the very early days of regulation conversations; however, lawmakers are actively introducing bills that propose the creation of an agency to oversee AI, consequences for technology that spreads disinformation, and licensing requirements for new AI tools, among other measures.
Recently, seven tech giants unveiled a set of guidelines aimed at enhancing the safety of their AI technologies. Among the guidelines are the implementation of third-party security assessments and the addition of watermarks to AI-generated content to combat the spread of misinformation.
However, this responsibility does not fall solely upon legislators. We as technology and health care leaders also have a role to play, and individual organizations should feel encouraged to take their own steps toward transparent and ethical AI. For example, some hospitals have already started to form their own "AI code of conduct" to ensure patients are aware of their intentions and internal guidelines. It is our responsibility to ensure that the technology we are creating is built on ethical algorithms, and that starts with the very foundation of our processes.
As AI leaders, we are challenged to build ethical AI and then communicate our commitment to safety to the public. It can be difficult to convey the importance of this practice properly, but there are a few overarching ways we can actually deliver on that promise.
To mitigate biases, AI systems need to be carefully designed and trained to uphold fairness and equality. The presence of bias in AI algorithms can reinforce prevailing societal disparities, resulting in discrimination, particularly if AI is utilized for research in the medical field.
To be held truly accountable, organizations must be honest and transparent about how their AI systems are built. Gaining insight into the decision-making mechanisms of AI algorithms is crucial to addressing concerns, ensuring adherence to legal and ethical norms around data privacy, and fostering trust between users and AI systems.
Because AI draws all of its information from data, ethical AI must protect individuals' personal information. It is critical that companies implement rigorous privacy policies, obtain patient consent, and store data securely to maintain confidentiality and establish trust with patients.
The potential influence of AI technologies on both physical and digital environments highlights the importance of implementing safety and risk reduction protocols, in the event that something does go awry. Ethical AI frameworks prioritize safety protocols, comprehensive testing, and risk assessment to avert accidents, errors, or unforeseen outcomes.
There is no one-size-fits-all approach to widespread AI adoption, and it will likely take a while to achieve majority buy-in within the medical field. The factors at play shift every day, and many companies are struggling to keep up, let alone blaze new trails.
However, empathy should remain at the forefront of decision-makers' minds when choosing how to approach the challenge of earning patient trust. At the end of the day, the organizations that gain their patients' trust will be the ones that actively work to combat biases in health care AI, transparently walk the line between over-reliance and strategic use, confront negative stigmas head on, and remain flexible.
About the Authors
Mark McNasby, CEO of Ivy.ai.
Ryan M. Cameron, EdD, VP of Technology and Innovation, Children’s Hospital & Medical Center in Omaha, Neb.