Microsoft has announced new tools to help combat deepfakes, Computing reports. The aim is to counter content such as video, audio, or photographs that has been edited to make it look like someone did or said something that they, in fact, never did.
According to the company, its first tool – dubbed ‘Video Authenticator’ – can analyse an image or video clip to determine whether it has been edited using AI. The tool then provides a confidence score indicating the likelihood that the media has been manipulated. In the case of a video, the tool shows a confidence score in real time on each frame as the video plays… According to Microsoft, the tool works by ‘detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.’ The company said that Video Authenticator was created using the public FaceForensics++ dataset, and has been tested on the DeepFake Detection Challenge Dataset.
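Microsoft has not published Video Authenticator's internals, but the described behaviour (a per-frame manipulation-confidence score driven by blending-boundary artefacts) can be sketched in outline. The `edge_inconsistency` heuristic below is an invented stand-in for the real detector, and all names and thresholds are assumptions for illustration only:

```python
# Hypothetical sketch of per-frame deepfake confidence scoring,
# loosely in the spirit of the described Video Authenticator.
# `edge_inconsistency` is a toy stand-in, NOT Microsoft's method.

def edge_inconsistency(frame):
    """Score how 'blended' a frame looks, on a 0-1 scale.

    `frame` is a 2D list of greyscale values (0-255). Abrupt
    horizontal intensity jumps serve as a crude proxy for the
    blending boundaries the real detector is said to look for.
    """
    jumps, total = 0, 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += 1
            if abs(a - b) > 64:  # arbitrary threshold
                jumps += 1
    return jumps / total if total else 0.0

def score_video(frames):
    """Return one manipulation-confidence score per frame."""
    return [edge_inconsistency(f) for f in frames]

# A smooth frame vs. a frame with a hard seam down the middle.
smooth = [[10, 12, 11, 13]] * 2
seamed = [[10, 10, 200, 200]] * 2
print(score_video([smooth, seamed]))  # → [0.0, 0.333...]
```

A real detector would replace the heuristic with a trained classifier, but the outer loop (score each frame, surface the result as the video plays) matches the behaviour Microsoft describes.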
Check It Out: Microsoft Unveils New Tools to Help Fight Deepfakes
Last week I received a Facebook message from someone I didn’t know, purportedly a twenty-something woman with a newly created account; I get these every other month or so. I shared the sender’s photograph to my timeline, and a number of my friends commented that it did not look like a real person. Indeed, I am sure it was a digital creation.