
Clinician Says Her Voice Was Cloned by AI for Social Media Ad

— "It's going to be a huge problem," says podiatrist Dana Brems

A photo of Dana Brems, DPM

Los Angeles podiatrist and social media personality Dana Brems, DPM, has charged in a social media post that a company used AI to create a fake recording of her voice for an ad.

The post shows Brems reacting, mouth covered in dismay, to what she said was an advertisement that "used an AI clone of my voice to pretend I recommended their product."

Social media posts from Brems regarding the ad -- which appears to be for an ear cleaning device -- have since racked up views, with many commenters pointing to the potential harms of fake health-related recommendations tied to medical professionals.

Brems discovered the ad via Facebook Rights Manager, which picked up the video because the first few seconds are from one she actually made, she told MedicalToday in an email.

The video Brems made showed her reaction to a separate video of people sticking an object in their ears, a practice she described as dangerous, noting that "we don't recommend sticking even Q-tips in your ears."

"So what the AI did is they faked the second half of that video," Brems said, making it sound as though she said she recommends using the product in the ad.

Who made the ad, or where they are based, remains unclear. Brems said the Instagram account where she found the ad, as well as an associated website, have been deleted following her post calling attention to the issue.

Hany Farid, PhD, of the School of Information at the University of California, Berkeley, told MedicalToday in an email that he had analyzed the audio Brems called out, using a model he and colleagues trained to distinguish real voices from AI-generated ones. "This model classifies the audio as AI-generated," Farid said.

This instance is not an isolated one, he noted.

"As voice cloning has improved, I've been seeing an increase in these types of fakes," said Farid, who is also a member of the Berkeley Artificial Intelligence Research Lab.

"In addition to the increasing ease with which deepfake videos can be made, it is also clear that the large social media platforms remain ill-equipped to handle content moderation as it pertains to deepfakes," he added.

Ashley Hopkins, PhD, senior research fellow in the College of Medicine and Public Health at Flinders University in Adelaide, Australia, said that the technology to clone voices or make deepfake videos is "readily available via various online tools," and that "minimal original audio or visual content is required to create convincing deepfakes without consent."

"The creation of deepfakes without an individual's knowledge raises serious ethical issues," Hopkins told Ƶ in an email. "There's an urgent need for robust regulatory frameworks to ensure AI developers implement strategies to prevent publicly accessible AI tools from facilitating such impersonations, whether it be of healthcare professionals or others."

Brems' concern also extends beyond her own experience.

"Once people catch on that they can use AI to impersonate doctors [and] other authority figures," Brems said, "it's going to be a huge problem."


Jennifer Henderson joined MedicalToday as an enterprise and investigative writer in January 2021. She has covered the healthcare industry in NYC, life sciences, and the business of law, among other areas.