Generative Artificial Intelligence has the potential to revolutionize healthcare, but the WHO has warned about the risks associated with its development.
Since the arrival of GenAI, Artificial Intelligence has been booming across more than 50 industries worldwide. One industry the global AI research community is focusing on is healthcare, where GenAI could transform critical functions such as rapid diagnosis and drug development. However, AI's impact on healthcare is not entirely positive.
Following a recent study, the WHO has raised concerns over GenAI's risks in healthcare. Large multi-modal models (LMMs) are being rapidly adopted for AI development in healthcare. LMMs can take in data from images, text, and videos to learn, understand, and execute instructions. The key highlight of this technology is that it can produce output in forms other than the type of data fed into it.
"It has been predicted that LMMs will have wide use and application in healthcare, scientific research, public health, and drug development," states an official from the WHO.
While the WHO outlined various areas where LMMs could benefit healthcare organizations, it also shared a documented list of harms the technology can cause to the system.
Any artificial intelligence technology requires data from existing systems, and flaws in that data carry over into what the model learns. As the healthcare industry rapidly integrates LMMs into its AI usage, it also runs a high risk of producing false, inaccurate, misleading, or biased outputs.
LMMs will largely be used to drive AI-generated actions and may even be involved in patients' treatment. Bias learned from poor data, whether related to race, ethnicity, ancestry, sex, gender identity, or age, can adversely affect patients and their treatment experience. In extreme cases where this technology is applied directly to treatment or cures, it can cause serious harm.
AI may have been in the limelight for the last three years, but it is still too new for humans to fully understand its power. Not only is our knowledge of AI limited, but adequate regulations to prevent misuse, or to compensate the public when something goes wrong, are also lacking.
"We need transparent information and policies to manage the design, development, and usage of LMMs."
- Jeremy Farrar, WHO Chief Scientist
The medical research field is symbiotic by nature. Sharing and receiving information transparently within the peer community is critical to shortening development timelines and ensuring error-free work. The AI community, however, is not as transparent or collaborative. While some AI platforms release their models as open source, many private products keep their breakthroughs to themselves for individual benefit. This can widen gaps in development timelines and introduce errors in documentation.
The major contributors to AI development are tech giants, so their ethical boundaries, business orientation, and flexibility will be essential in defining the future pathways of the technology. Strict and swift regulation must be put in place across governments to ensure the safe and productive development of AI. Mandatory restrictions and information-sharing requirements in healthcare are needed to support the healthy and unbiased development of AI capabilities.