Last Updated: October 8, 2019 5:50PM
by Mark Aiello with April Crehan
The myriad “unknowns” surrounding the applications of AI in healthcare pose a challenge to its optimal use. As a panelist at our recent life sciences roundtable noted: the more we know, the less resistant we are to advances in AI applications.
To successfully integrate the most cutting-edge applications of AI into the healthcare industry, it thus stands to reason that we all need more education. We can help with uncertainty surrounding regulations by educating government officials and regulatory bodies. To improve acceptance of new devices and approaches, we need to educate the public. And of course, we need to continually educate ourselves.
Educating Regulatory Bodies
Most elected representatives will not have extensive backgrounds in machine intelligence or medical research. They do, however, oversee the regulatory bodies that shape their application. How can we expect government entities to make decisions about regulations and funding without fully understanding what they’re regulating and funding?
In the US, groups like TechCongress and the American Association for the Advancement of Science are already working to place tech- and science-savvy professionals in congressional offices. Policies need to be well informed so they can help, not hinder, the implementation of new technologies.
By broadening their understanding of AI, stakeholders could help lower one barrier to adoption: the shortage of regulations surrounding the use of AI in healthcare. The Partnership on AI, an organization that unites researchers, industry, and civil society organizations, has made ensuring “that key stakeholders have the knowledge, resources, and overall capacity to participate” a central part of its four pillars.
This drive to educate others about AI is part of a global trend toward increased transparency in healthcare. Plain language summaries of clinical trial results are a prime example of this trend. Though the practice originated in clinical research (in post-trial plain language summaries provided to participants and the public at large), the principle of “translating” technical or scientific content into plain language has much broader applications. Explaining a technical topic in easy-to-understand terms benefits both the officials you’re trying to educate and the public and patients who will be impacted.
Educating the Public at Large
Educating the public about AI in particular is especially important due to a fundamental fear of the unknown. Americans in general are fearful of AI causing more harm than good; European numbers show a similar mix of excitement and unease. (Fear impacts those implementing the new technologies, too.) Educating the public about which risks are realistic may settle some of these fears. In turn, that could increase the possibility of implementing life-saving AI applications.
The public, and especially patients, need and deserve to know the latest news and trends in clinical research. Simple explanations of complex topics benefit scientists as well as laypeople. For example, scientific articles covered in the New York Times see an increased rate of citation. Easy accessibility of resources and education increases the likelihood that patients and caregivers will make informed decisions. Clear explanations and expectations can also improve adherence to study requirements. That, in turn, means improved outcomes for researchers as well as for patients. Plain language improves understanding across cultures and language proficiencies, which means researchers can access a more diverse data set, which in turn allows AI designers to build more robust systems.
Education of the public at large and government officials is a self-perpetuating loop. This time, though, it’s a positive one. Voters (patients or not) who understand current research are more likely to be interested and invested in useful regulation and funding. This naturally increases the likelihood that an elected official will care about the same. And transparency and education can also help us understand our colleagues and research across disciplines.
Anyone in the clinical trial space knows there is always something new to learn. Educational opportunities like the roundtable discussion we recently hosted are important avenues for anyone exploring the intersection of medicine and AI to share their knowledge.
How else can we use our own experiences to accelerate the research process and the adoption of AI in our fields? We’ve already found ways to come together and push for industry standards surrounding AI. How can we share our successes and failures to improve the work of others?
This knowledge-sharing is a more delicate process than drafting plain language summaries, for example, because of privacy concerns. Still, a willingness to share findings while respecting privacy may be the best way to push medical innovation forward.
How can data sharing and collaboration mean more results for everyone faster and easier? Read our article to find out.