Navigating the minefield of AI in healthcare: Balancing innovation with accuracy

In a recent “Fast Facts” article published in the journal BMJ, researchers discuss recent developments in generative artificial intelligence (AI), the technology’s significance in today’s world, and the risks that must be resolved before large language models (LLMs) like ChatGPT can become the reliable sources of factual information that many users already assume them to be.

What is generative artificial intelligence (AI)?

“Generative artificial intelligence (AI)” refers to a subset of AI models that produce context-dependent content (text, images, audio, and video). These models underpin the natural language systems behind productivity apps such as Grammarly AI and ChatGPT, as well as AI assistants like Google Assistant, Amazon Alexa, and Siri. The technology is among the fastest-growing fields in digital computation and has the potential to significantly advance many societal domains, including medical research and healthcare.

Regrettably, the development of generative AI, particularly large language models (LLMs), has progressed much faster than ethical and safety oversight, raising the possibility of serious harms, whether unintentional or deliberate (malicious). Research indicates that over 70% of consumers get their health and medical information from the internet, and every day more users turn to LLMs such as Gemini, ChatGPT, and Copilot with their questions. The current article focuses on three areas of vulnerability: AI errors, health misinformation, and privacy problems. It also highlights the efforts of emerging fields such as AI Safety and Ethical AI to address these vulnerabilities.
AI-related errors

Data processing errors are a common problem for all AI technologies. Erroneous or misleading information becomes harder to identify as input datasets grow larger and model outputs (text, audio, images, or video) become more complex.

These mistakes can quickly become costly for members of the public who cannot distinguish genuine from false information, particularly where medical information is concerned. Given the growing number of studies employing generative AI and LLMs for data analysis, such errors could mislead even highly qualified medical experts.

Fortunately, a variety of technological approaches are being developed to reduce AI errors. One particularly promising approach is building generative AI models that “ground” themselves in data obtained from reliable and authoritative sources. Another technique is having the model express “uncertainty” alongside its output: when the model reports low confidence in the information it presents, the user knows to fall back on trustworthy sources. Some generative AI models already include citations in their output, encouraging the user to verify information before taking the model’s conclusions at face value.
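To make these mitigation ideas concrete, here is a minimal sketch, in Python, of how a grounded, uncertainty-aware response might be packaged for the user: the answer carries its supporting citations and a confidence score, and low-confidence answers are flagged so the reader knows to verify them. The `GroundedAnswer` type, the 0.7 threshold, and the rendering logic are all hypothetical illustrations, not features of any particular model.

```python
from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    """An AI answer packaged with its supporting sources and a confidence score."""
    text: str
    sources: list[str] = field(default_factory=list)  # citations to authoritative documents
    confidence: float = 0.0                           # model's self-reported certainty, 0-1

CONFIDENCE_FLOOR = 0.7  # hypothetical threshold below which the user is warned

def present(answer: GroundedAnswer) -> str:
    """Render the answer with its citations, warning the user when confidence is low."""
    lines = [answer.text]
    if answer.sources:
        lines.append("Sources: " + "; ".join(answer.sources))
    if answer.confidence < CONFIDENCE_FLOOR:
        lines.append(
            f"Note: confidence is low ({answer.confidence:.0%}); "
            "please verify against a trusted medical source."
        )
    return "\n".join(lines)

# Example: a grounded answer with one citation and middling confidence.
print(present(GroundedAnswer(
    text="Adults generally need 7-9 hours of sleep per night.",
    sources=["CDC sleep guidelines"],
    confidence=0.62,
)))
```

The design point is simply that sources and confidence travel with the answer instead of being discarded, so the user can judge when to double-check.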

Health misinformation

Disinformation differs from AI hallucinations in that the former is deliberate and malicious, while the latter are unintentional and incidental. Although spreading false information has long been a part of human culture, generative AI offers an unprecedented platform for producing “diverse, high-quality, targeted disinformation at scale” at almost no cost to the bad actor.

Privacy and bias

Data used to train generative AI models, particularly medical data, must be vetted to ensure that no identifiable information is included, protecting the privacy of both the people who use the models and the patients whose data the models were trained on. Crowdsourced data is typically covered by the AI model’s privacy terms and conditions; researchers using such data are required to follow these guidelines and to avoid submitting any information that could identify the volunteer in question.
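As a purely illustrative sketch of that vetting step, the snippet below scrubs two obvious identifier patterns (email addresses and phone numbers) from records before they would enter a training set. Real medical de-identification is far more involved, covering names, dates, addresses, record numbers, and rare conditions, so this shows the idea rather than a usable safeguard; the patterns and placeholder tags are assumptions.

```python
import re

# Hypothetical patterns for two common identifier types; real medical
# de-identification covers many more (names, dates of birth, addresses, IDs).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(record: str) -> str:
    """Replace recognizable identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(scrub("Patient reachable at jane.doe@example.com or +1 (555) 012-3456."))
# -> "Patient reachable at [EMAIL] or [PHONE]."
```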

AI models also have an inherent tendency to skew their outputs according to their training data, a phenomenon known as bias. Most AI models are trained on large datasets, often scraped from the internet; if those datasets over- or under-represent particular groups, the models’ outputs will reflect the same imbalance.
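One simple way to surface such skew before training, sketched below with assumed toy data, is to count how different groups are represented in the dataset and report each group’s share, so obvious over- or under-representation is visible up front. The record format and group labels are hypothetical.

```python
from collections import Counter

# Toy records; in practice these would be millions of scraped documents.
records = [
    {"text": "...", "group": "adult"},
    {"text": "...", "group": "adult"},
    {"text": "...", "group": "adult"},
    {"text": "...", "group": "pediatric"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())

# Report each group's share of the training data so imbalances
# can be reviewed before training begins.
for group, n in counts.most_common():
    print(f"{group}: {n}/{total} ({n / total:.0%})")
```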

Conclusions

Generative AI models, the most popular of which include LLMs such as ChatGPT, Microsoft Copilot, and Gemini, along with media generators such as Sora, represent some of the most powerful productivity enhancements of the modern age. Unfortunately, advancements in these fields have far outpaced credibility checks, creating the potential for errors, disinformation, and bias, with potentially severe consequences, especially in healthcare. The present article summarizes some of the dangers of generative AI in its current form and highlights techniques under development to mitigate them.

