Facebook’s new Artificial Intelligence technology not only identifies deepfakes, it can also give hints about their origin / Digital Information World
Pictures and videos created with Artificial Intelligence (AI) have become very popular, and that can also lead to serious problems, since fake videos and manipulated pictures of any kind can be used to get anyone into trouble. Deepfakes use deep learning models to create fictional photos, videos, and events. Nowadays, deepfakes look so realistic that it has become very difficult for the human eye to tell a real picture from a fake one. The Facebook AI team, working with a group at Michigan State University, has developed a model that can not only identify fabricated images and videos but can even trace their origin.
Facebook’s latest technology checks images against a compilation of deepfake data sets to see whether they share common ground, looking for distinctive patterns such as small specks of noise or small quirks in a photo’s color range. By recognizing these small fingerprints in a photo, the new AI model can infer details about how the neural network that produced the photo was designed, e.g. how large the model is and how it was trained.
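The article does not describe Facebook’s actual method in detail, but the idea of a generator leaving a subtle noise fingerprint can be sketched in a few lines. The following is a minimal illustration, not the real system: it assumes a fingerprint can be estimated by high-pass filtering many images from the same generator and averaging, so that image content cancels out while the generator’s systematic noise remains. All function names here are invented for the example.

```python
import numpy as np

def noise_residual(image):
    """High-pass filter: subtract a simple 3x3 local mean so only the
    fine-grained noise pattern of the image remains."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return image - blurred

def model_fingerprint(images):
    """Average the residuals of many images from the same generator;
    per-image content averages out, the generator's signature remains."""
    return np.mean([noise_residual(img) for img in images], axis=0)

def match_score(residual, fingerprint):
    """Cosine similarity between a test image's residual and a known
    fingerprint -- a higher score suggests a likelier match."""
    a, b = residual.ravel(), fingerprint.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In this toy setup, attributing an unknown image means computing its residual and picking the stored fingerprint with the highest match score.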
The experts tested the AI technology on data from around 100,000 fake images created by 100 different generative models, each producing a thousand images. The aim was to make the technology competent with only a few training images per model; the remaining images were held back and then shown to the system as images with unknown creators and origins. The experts declined to disclose how accurate the system was during the test, but assured that they are working to make the technology even better so that the platform’s moderators can use it to identify the corresponding fake content.
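The protocol described above, learning from a handful of images per generator and then attributing unseen ones, resembles a nearest-centroid evaluation. This is a toy sketch of that protocol with synthetic stand-in feature vectors, scaled down from the reported 100 models and 1,000 images each; none of it is the researchers’ actual code or data.

```python
import numpy as np

rng = np.random.default_rng(42)

NUM_GENERATORS = 10    # the real study used 100 generative models
IMAGES_PER_GEN = 100   # the real study used 1,000 images each
TRAIN_PER_GEN = 5      # "competent with only a few images"

# Each synthetic generator has a characteristic signature vector;
# individual images add their own content noise on top of it.
signatures = rng.normal(0, 1, (NUM_GENERATORS, 64))

def fake_image_features(gen_id):
    """Synthetic feature vector for one image from generator gen_id."""
    return signatures[gen_id] + rng.normal(0, 1, 64)

# Build one centroid ("fingerprint") per generator from a few images.
centroids = np.stack([
    np.mean([fake_image_features(gid) for _ in range(TRAIN_PER_GEN)], axis=0)
    for gid in range(NUM_GENERATORS)
])

# Attribute the held-out images to the nearest centroid and score accuracy.
correct, total = 0, 0
for gid in range(NUM_GENERATORS):
    for _ in range(IMAGES_PER_GEN - TRAIN_PER_GEN):
        feats = fake_image_features(gid)
        pred = int(np.argmin(np.linalg.norm(centroids - feats, axis=1)))
        correct += int(pred == gid)
        total += 1

accuracy = correct / total
print(f"attribution accuracy: {accuracy:.2f}")
```

Because the article’s authors did not publish their accuracy figures, the number this toy prints says nothing about the real system; it only demonstrates the train/hold-out attribution setup.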
A writer covering deepfakes wondered how effective the technology will be beyond the lab environment, when it faces fake images in the wild on the internet. The author went on to say that the fake images that were identified came from a curated database and were processed in the laboratory; there is still a chance that YouTubers could create plenty of realistic-looking videos and pictures that bypass the system. The experts had no other research data to compare their results against, but they say the system is working much better than before.