DM Monitoring
SAN FRANCISCO: Microsoft has unveiled software that can help spot “deepfake” photos or videos, adding to the list of programs designed to fight the hard-to-detect images ahead of the US presidential election.
The Video Authenticator software analyzes an image or each frame of a video, looking for evidence of manipulation that could even be invisible to the naked eye.
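For readers curious how frame-by-frame detection works in general, the sketch below is a purely illustrative outline, not Microsoft's actual tool: it loops over a video's frames, asks a classifier for a manipulation score per frame, and aggregates the scores into an overall confidence. The `score_frame` stub, the threshold and the output fields are assumptions made for this example.

```python
# Illustrative sketch only -- NOT Microsoft's Video Authenticator.
# Assumes a pre-trained frame classifier; here it is stubbed out.
import cv2          # OpenCV, used for frame extraction
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Placeholder for a real deepfake classifier (e.g. a CNN trained to
    spot blending boundaries or subtle greyscale artefacts)."""
    return 0.0  # stub value

def score_video(path: str, threshold: float = 0.5) -> dict:
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:              # end of stream
            break
        scores.append(score_frame(frame))
    cap.release()
    scores = np.array(scores) if scores else np.zeros(1)
    return {
        "mean_score": float(scores.mean()),           # overall confidence
        "flagged_frames": int((scores > threshold).sum()),
        "total_frames": int(len(scores)),
    }

if __name__ == "__main__":
    print(score_video("clip.mp4"))
```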
Deepfakes are photos, videos or audio clips altered using artificial intelligence to appear authentic and are already targeted by initiatives on Facebook and Twitter.
“They could appear to make people say things they didn’t or to be places they weren’t,” said a company blog post on Tuesday.
Microsoft said it has partnered with the AI Foundation in San Francisco to make the video authentication tool available to political campaigns, news outlets and others involved in the democratic process.
Deepfakes are part of a broader wave of online disinformation that experts warn can spread misleading or outright false messages.
Convincing fakes are of particular concern ahead of the US presidential election in November, especially after false social media posts surged during the 2016 vote that brought Donald Trump to power.
Microsoft also announced it has built technology into its Azure cloud computing platform that lets creators of photos or videos embed hidden data that can later be used to check whether the imagery has been altered.
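The general idea behind such provenance data can be sketched simply: hash the media file when it is published, sign the hash, and later recompute and verify both to detect any alteration. The example below is an assumption-laden simplification, with a shared-secret HMAC standing in for the certificate-based signing a real provenance system would use; the key, file names and manifest layout are invented for illustration.

```python
# Illustrative sketch only -- NOT the Azure feature described above.
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"   # stand-in for a real signing key/certificate

def publish_manifest(media_path: str) -> dict:
    """Hash the file at publication time and sign the hash."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    signature = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return {"file": media_path, "sha256": digest, "signature": signature}

def verify_manifest(media_path: str, manifest: dict) -> bool:
    """Recompute the hash and check it against the signed manifest."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    expected = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == manifest["sha256"]
            and hmac.compare_digest(expected, manifest["signature"]))

if __name__ == "__main__":
    manifest = publish_manifest("photo.jpg")
    print(json.dumps(manifest, indent=2))
    print("authentic:", verify_manifest("photo.jpg", manifest))
```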
The technology titan said it plans to test the program with media organizations including the BBC and the New York Times.
Microsoft is also working with the University of Washington and others to help people become better at distinguishing misinformation from reliable information.
“Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody,” the Microsoft post said.