Date: January 19, 2024
Generative Artificial Intelligence has the potential to revolutionize healthcare, but the WHO has warned about the risks that come with its rapid development.
Since the arrival of GenAI, Artificial Intelligence has been taking root in more than 50 industries worldwide, and healthcare is one of the areas the global AI research community is watching most closely. GenAI could transform critical functions such as rapid diagnosis and drug development. But the developments of AI in healthcare are not entirely positive.
Following a recent review, the WHO has flagged rising concerns over GenAI's risks in healthcare. Large multimodal models (LMMs) are quickly being adopted for healthcare AI. LMMs can take in data from images, text, and video to learn, understand, and execute instructions, and, notably, they can produce output in a form different from the data fed into them.
“It has been predicted that LMMs will have a wide use and application in healthcare, scientific research, public health, and drug development,” states an official from the WHO.
While the WHO outlined various ways healthcare organizations could benefit, it also documented a list of harms the technology could cause to the healthcare system.
Any artificial intelligence technology learns from data drawn from existing systems, and flaws in that data carry through to what the model learns. As the healthcare industry rapidly integrates LMMs into its AI usage, it also runs a high risk of these models producing false, inaccurate, misleading, or biased outputs.
LMMs will largely be used to drive actions and may even be involved in patients' treatment. Bias learned from poor data, whether related to race, ethnicity, ancestry, sex, gender identity, or age, can adversely affect patients and their treatment experience. In extreme cases where the technology is used directly in treatment or cures, it can also lead to serious harm.
AI may have been in the limelight for the last three years, but it is still too new for us to fully understand its true power. Not only is our knowledge of AI limited, but adequate regulations to prevent misuse, or to compensate the public when something goes wrong, are also lacking.
We need transparent information and policies to manage the design, development, and usage of LMMs.
- Jeremy Farrar (WHO Chief Scientist)
The medical research field works in a symbiotic way: sharing and receiving information transparently within the peer community is critical to shortening development timelines and keeping work error-free. The AI community is not as transparent or collaborative. While some AI platforms release open-source models, many private products keep their breakthroughs to themselves for competitive advantage. This can stretch development timelines and introduce gaps and errors in documentation.
The major contributors to AI development are tech giants, and their ethical boundaries, business priorities, and flexibility will be essential in shaping its future. Governments need to put strict regulations in place quickly to ensure the safe and productive development of AI technologies, with mandatory restrictions and information sharing in healthcare to support the healthy and unbiased development of AI capabilities.
By Arpit Dubey
Arpit is a dreamer, wanderer, and tech nerd who loves to jot down tech musings and updates. With a knack for crafting compelling narratives, Arpit specializes in everything from Predictive Analytics to Game Development, along with artificial intelligence (AI), Cloud Computing, IoT, and let’s not forget SaaS, healthcare, and more. Arpit crafts content that’s as strategic as it is compelling. With a Logician's mind, he is always chasing sunrises and tech advancements while secretly preparing for the robot uprising.