Google Takes Action to Remove Deepfake Images and Videos from Search Engine
Google has announced measures to prevent inappropriate deepfake images and videos generated with artificial intelligence (AI) from appearing in its search engine. The company has made clear that such AI-generated deepfake content is not welcome in its search results.
Deepfake technology swaps one person’s face onto another’s, making it appear as though an image or video features someone who was never actually in it. Advances in AI have made creating deepfakes significantly easier.
Google is adopting a new policy to remove such content from its search engine; content that cannot be removed will be demoted in the search results. The company says it will work with experts to address the issue and improve its systems for combating deepfake content.
Google is also making it easier for individuals targeted by deepfake images or videos to request their removal. When a request is granted, Google’s systems will also attempt to remove duplicate copies of the same content.
The company acknowledges that it cannot guarantee complete removal of such content from the search engine, which is why it is also improving its ranking system. Under the new ranking, inappropriate content will be pushed lower in the search results to minimize its visibility.
Deepfake technology entered the mainstream around 2019, and experts have warned that it could be used to create non-consensual explicit content, to blackmail individuals, or to inflame political conflict. Early deepfake images and videos were relatively easy to detect, but the technology has since improved significantly.
A report from August 2022 found that deepfake technology is increasingly being used in cyberattacks, posing a growing real-world threat. According to VMware’s annual Incident Response Threat Report, the use of this technology to alter faces and voices in attacks rose by 13% year over year.