AI deepfakes spur calls for more control

BEIJING: An artificial intelligence-generated deepfake of Chinese actress Wen Zhengrong’s face and voice was used by unscrupulous merchants to impersonate her in livestream sales, prompting calls for stronger, more tailored regulation and penalties from internet platforms and under the law.

The discovery was made last week when Wen appeared to be hosting three early-morning livestream rooms on social media simultaneously, wearing different outfits and promoting different products. The “clones” looked and sounded strikingly like the actress, quickly igniting online discussion.

According to a China Media Group report on Wednesday, the forged images were produced either by clipping past videos and screen recordings or by taking earlier livestream footage of Wen and running it through AI-based deep synthesis, including voice alteration.
“These AI tactics confuse the public. My image and likeness have been infringed, and it is deeply hurtful,” Wen said in the video report. She added that if viewers who trust her were misled into buying counterfeit goods, “I would feel truly sad.”

Li Ya, a partner at Zhongwen Law Firm in Beijing, told China Daily that such conduct is suspected of violating Wen’s right of portrait and may also harm her right of reputation.
Using someone’s image for profit without authorization infringes on portrait rights, he said.
“If sellers speak in her name and make false or exaggerated claims, that will negatively impact a public figure’s reputation.”

Wen’s team said that once the fake clips began circulating, they filed reports around the clock, flagging about 50 impersonation accounts in one day, according to CMG.

Some livestreaming accounts were taken down, they said, but others quickly reappeared in new forms. Wen’s staff noted that some merchants can fabricate content simply by extracting brief clips of footage and using AI functions built into video-editing apps, while the team faces a far heavier burden in preserving evidence and defending her rights.

Li said it is unrealistic to expect victims alone to safeguard their rights.

“Rule-breaking merchants can open new accounts at will and face almost no cost for infringement,” he added.

He noted that social platforms have a duty to deploy technology that detects improper use of AI tools in livestreams and short videos, and to penalize offending accounts as well as the companies and teams operating them, in order to prevent harm to third parties.

In September, new regulations on labeling AI-generated synthetic content, released by the Cyberspace Administration of China and other agencies, took effect. The rules require clear “AI-generated” labels on synthetic faces and videos. –The Daily Mail-China Daily news exchange item