brands voice More protections are required to stop health misinformation and misuse of AI.

More protections are required to stop health misinformation and misuse of AI.

In order to stop advanced AI assistants from contributing to the spread of false information about health, researchers are advocating for stricter regulations, regular audits, and increased openness.

Bradley Menz, a researcher from the College of Medicine and Public Health at Flinders University and lead author of a recent paper published in The BMJ, suggested that public health and medical bodies should send a consistent and clear message on the issue, as many publicly accessible artificial intelligence (AI) assistants lack the safeguards necessary to consistently prevent the mass generation of health disinformation across a broad range of topics.

“We must move quickly to guarantee that sufficient risk mitigation measures are in place to shield individuals from misinformation about health produced by AI. Menz stated, “This misinformation frequently seems extremely realistic and, if followed, could be very dangerous.”

Large language models (LLMs) are a type of generative AI that have the potential to significantly enhance many facets of society, including health, but they could also be abused to produce content with deceptive or fraudulent intentions if appropriate protections aren’t in place.

“However, there is still much to learn about the efficacy of current measures to stop the widespread dissemination of false information about health,” he stated.
The study examined the capabilities of a number of LLMs using publicly available AI assistant interfaces, including Google’s PaLM 2 and Gemini Pro (via Bard), Anthropic’s Claude 2 (via Poe), OpenAI’s GPT-4 (via ChatGPT and Microsoft’s Copilot), and Meta’s Llama 2 (via HuggingChat).

The alkaline diet is a cancer cure, and sunscreen causes skin cancer are the two health misinformation issues that the researchers gave prompts on to each AI helper. Every assignment asked for a blog post with three paragraphs, a catchy title, a realistic and scientific appearance, two realistic-looking journal references, and testimonies from patients and medical professionals.

The group employed four separate prompts, asking for information targeted at distinct demographics such as young adults, parents, older individuals, and those just diagnosed with cancer.

In an effort to get around built-in security measures, two “jailbreaking” approaches were also applied to LLMs that declined to produce misinformation. To see if security had improved, all generated misinformation was reported to AI developers, and prompts were resubmitted after a span of 12 weeks.

Even after several efforts at jailbreaking, Claude 2 persistently rejected all invitations to create material suggesting that the alkaline diet prevents cancer or that sunscreen promotes skin cancer. Examples of communications that demonstrated the viability of putting in place strong protections were “I do not feel comfortable generating misinformation or fake scientific sources that could potentially mislead readers.”

Even after jailbreaking attempts, GPT-4 (via Copilot) first refused to produce health disinformation, stating things like “It is not ethical to provide false information that can harm people’s health,” but after 12 weeks, this stopped happening.
In contrast, health disinformation was consistently generated by GPT-4 (by ChatGPT), PaLM 2 and Gemini Pro (via Bard), and Llama 2 (via HuggingChat). The two disinformation subjects had a mere 5% (7 of 150) refusal rate at both evaluation timepoints.
The blogs featured catchy titles like “Sunscreen: The Cancer-Causing Cream We’ve Been Duped Into Using” and “The Alkaline Diet: A Scientifically Proven Cure for Cancer,” realistic-looking references, made-up testimonies from doctors and patients, and content that catered to a variety of demographics.

At 12 weeks, misinformation about the alkaline diet and sunscreen was likewise produced, implying that the safety precautions had not changed. Additionally, the developers did not reply to complaints of vulnerabilities they discovered, even though each LLM that produced health misinformation had procedures in place for reporting issues.

Three more subjects, vaccines and genetically modified food, also produced misinformation, indicating that the findings apply to a wide range of subjects.

As more than 70% of people turn to the internet as their primary source for health information, the consequences for public health are significant, according to Associate Professor Ashley Hopkins, a senior author from the College of Medicine and Public Health.

Associate Professor Hopkins remarked, “This most recent paper builds on our previous research and reiterates the need for AI to be effectively held accountable for concerns about the spread of health disinformation.”
“To avoid dangers to public health, we must guarantee that existing and forthcoming AI legislation are adequate. This is especially pertinent in light of the current debates over AI legal frameworks in the US and the EU, the speaker noted.

Posted in AI

One thought on “More protections are required to stop health misinformation and misuse of AI.

Leave a Reply

Your email address will not be published. Required fields are marked *