Mountain View, USA – Google has announced new capabilities for its Gemini model aimed at helping users distinguish real videos from those created or manipulated with artificial intelligence.
Gemini analyzes video along several dimensions, including facial expressions, frame stability, and the movement of objects. It also compares the audio track against the video to detect technical inconsistencies that may indicate the use of deepfake techniques, as illustrated in the sketch below.
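The article does not describe Gemini's internals, so the following Python sketch is purely illustrative: it shows, with hypothetical cue names and placeholder weights, how scores for the signals mentioned above (facial expressions, frame stability, object motion, and audio-video consistency) could be combined into a single manipulation-likelihood estimate. None of the names or numbers below come from Google.

```python
# Illustrative sketch only: NOT Google's implementation. The cue names and
# weights are hypothetical, chosen to mirror the signals described in the
# article (facial expressions, frame stability, object motion, audio-video sync).

from dataclasses import dataclass


@dataclass
class CueScores:
    """Per-video scores in [0, 1]; higher means the cue looks more natural."""
    facial_expression_consistency: float
    frame_stability: float
    object_motion_plausibility: float
    audio_video_sync: float


def manipulation_likelihood(cues: CueScores) -> float:
    """Combine cue scores into a manipulation likelihood in [0, 1].

    A real detector would learn these weights from labeled data; the values
    here are placeholders for illustration.
    """
    weights = {
        "facial_expression_consistency": 0.35,
        "frame_stability": 0.15,
        "object_motion_plausibility": 0.20,
        "audio_video_sync": 0.30,
    }
    naturalness = sum(weights[name] * getattr(cues, name) for name in weights)
    return 1.0 - naturalness


if __name__ == "__main__":
    # Example: a clip whose lip movement drifts out of sync with the audio.
    suspicious_clip = CueScores(
        facial_expression_consistency=0.6,
        frame_stability=0.9,
        object_motion_plausibility=0.8,
        audio_video_sync=0.3,
    )
    print(f"Estimated manipulation likelihood: {manipulation_likelihood(suspicious_clip):.2f}")
```

In practice, such weights and decision thresholds would be learned from labeled real and synthetic footage rather than hard-coded, which is consistent with Google's description of models trained on large volumes of visual content.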
According to Google, the new technology relies on machine learning models trained on vast amounts of visual content. This allows it to recognize artificial, unnatural patterns that the average viewer doesn’t easily notice, especially in videos that quickly go viral on social media platforms.
The move comes amid growing global concern over the use of fake videos to spread disinformation and sway public opinion, which has prompted major technology companies to develop tools that protect trust in digital content.
Google confirmed that the feature will be rolled out gradually to media outlets and content creators, with plans to expand its use later as part of the company’s vision to build a more transparent and secure digital environment.
Observers see the move as a significant step in combating visual manipulation, but also as a reflection of the ongoing arms race between technologies that produce fake content and the tools built to detect it, a contest whose stakes are ultimately the credibility of the digital world.