Google has expanded its “Results about you” feature, allowing users to request the removal of sensitive personal information, including government-issued identification numbers and non-consensual explicit imagery, as part of a broader effort to counter identity theft and the growing misuse of deepfake technology.

Announced as an upgrade to the privacy controls within Google Search, the enhanced tool gives individuals a more streamlined process to monitor and request takedowns of search results that expose personal data such as passport numbers, national ID details, bank account information and intimate images shared without consent. The move comes amid rising global concern over online impersonation, financial fraud and synthetic media manipulation.
The feature, first introduced in 2022, was designed to help people find and remove personal contact details that appear in search results. With the latest update, Google has expanded the scope of removable content and simplified the dashboard interface. Users can now receive proactive alerts when their personal details surface in search listings and can submit removal requests directly from a centralised hub within their Google account.
Company executives said the changes reflect mounting evidence that identity-related abuse is becoming more sophisticated. Advances in artificial intelligence have enabled the rapid creation of convincing deepfake images and videos, often used to harass individuals or extort money. At the same time, data breaches across industries have increased the circulation of sensitive information online, heightening risks of financial crime and reputational damage.
Under the revised policy framework, Google will consider removal requests for doxxing material, explicit content shared without permission and fraudulent pages impersonating individuals. The company has also updated its guidelines for addressing digitally altered imagery that falsely depicts a person in explicit situations. In such cases, affected users may request the removal of both the manipulated content and related search queries that amplify its visibility.
Digital rights advocates have long argued that search engines play a critical gatekeeping role in either amplifying or limiting harm. While Google does not host most of the content indexed in Search, its algorithms determine how easily such material can be discovered. By lowering procedural barriers to takedown requests, the company appears to be acknowledging the scale of personal risk posed by searchable exposure.
Privacy experts note that identity theft remains one of the most prevalent forms of cybercrime worldwide. Law enforcement agencies in multiple jurisdictions have reported steady growth in cases involving stolen credentials used for financial scams, fraudulent loan applications and social engineering schemes. The expansion of tools aimed at removing identification numbers and financial data from public visibility could reduce the attack surface for criminals, though experts caution that removal from search results does not erase content from the originating website.
Google has emphasised that removal decisions are subject to verification processes intended to balance privacy with the public’s right to information. Content of legitimate public interest, including information tied to professional misconduct or criminal proceedings, may not qualify for removal. The company maintains that each request is assessed individually to avoid overreach.
Technology analysts view the update as part of a broader recalibration across the industry. Major platforms have faced increasing regulatory scrutiny over their handling of harmful content and personal data. In Europe, data protection authorities continue to enforce strict standards under the General Data Protection Regulation, including the “right to be forgotten”. In the United States and parts of Asia, policymakers are debating frameworks to address AI-generated misinformation and digital impersonation.
Google’s enhanced system introduces a more user-friendly notification mechanism. Once individuals register their details, they can receive alerts when new search results match specified personal information. This monitoring function aims to provide earlier detection of exposure, rather than relying solely on manual searches. Users can then initiate removal requests with supporting documentation, and track the status of submissions through the same interface.
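The mechanics of the system have not been published, but in general terms a monitoring-and-takedown flow pairs a set of registered details with newly indexed results and files trackable requests when they match. The sketch below is a purely illustrative Python outline of that idea; the data structures, function names and tracking format are hypothetical and do not describe Google's actual implementation.

```python
# Illustrative sketch only: a generic monitor-alert-request flow of the kind
# described in the article. All names and structures here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class RegisteredProfile:
    """Personal details a user has asked to monitor (hypothetical structure)."""
    user_id: str
    watched_terms: list[str]                      # e.g. name, email address, phone number
    pending_requests: list[str] = field(default_factory=list)


def scan_new_results(profile: RegisteredProfile, new_results: list[dict]) -> list[dict]:
    """Return newly indexed results whose title or snippet mentions a watched term."""
    matches = []
    for result in new_results:
        text = (result.get("title", "") + " " + result.get("snippet", "")).lower()
        if any(term.lower() in text for term in profile.watched_terms):
            matches.append(result)
    return matches


def submit_removal_request(profile: RegisteredProfile, result: dict, documentation: str) -> str:
    """Record a removal request and return a tracking id the user could follow up on."""
    tracking_id = f"req-{profile.user_id}-{len(profile.pending_requests) + 1}"
    profile.pending_requests.append(tracking_id)
    # In a real system the URL and supporting documentation would go to a review queue.
    return tracking_id


# Example: alert on a match, then file a trackable request.
profile = RegisteredProfile(user_id="u123", watched_terms=["jane.doe@example.com"])
new_results = [{"title": "Leaked contact list",
                "snippet": "reach jane.doe@example.com",
                "url": "https://example.com/page"}]
for hit in scan_new_results(profile, new_results):
    print("Alert:", hit["url"], "->", submit_removal_request(profile, hit, documentation="identity proof"))
```

The point of the sketch is the ordering the article describes: detection happens continuously against registered details, so the user is alerted before having to search manually, and every request carries an identifier that can be tracked from the same interface.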
Civil liberties groups have welcomed the added protections against non-consensual explicit imagery, a category that has expanded with the accessibility of generative AI tools. Victims of image-based abuse often face prolonged psychological and professional consequences when intimate content circulates widely online. By allowing removal of both original and altered material from search listings, Google is attempting to limit the viral spread that can occur through indexing.
Critics, however, question whether reactive removal tools are sufficient. They argue that stronger preventive measures, such as automated detection of sensitive identifiers and enhanced verification of impersonation sites, may be required to address systemic vulnerabilities. Some also warn that increased takedown capacity must be managed carefully to prevent misuse or censorship of legitimate reporting.
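As a rough illustration of what "automated detection of sensitive identifiers" could mean in practice, the following Python sketch flags text that merely resembles common identifier formats. The patterns are simplified examples chosen for this article, not rules any platform is known to use, and a production system would add validation such as checksums and context to limit false positives.

```python
# Illustrative sketch only: pattern-based flagging of strings that look like
# sensitive identifiers. Patterns are deliberately simplified examples.
import re

SENSITIVE_PATTERNS = {
    "us_ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # 123-45-6789 style
    "iban_like": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),   # country code + check digits + account
    "card_number_like": re.compile(r"\b(?:\d[ -]?){13,19}\b"),      # long digit runs with optional separators
}


def flag_sensitive_identifiers(text: str) -> list[str]:
    """Return the names of identifier patterns that appear in a text snippet."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


print(flag_sensitive_identifiers("Account IBAN DE44500105175407324931, SSN 123-45-6789"))
# Prints ['us_ssn_like', 'iban_like']; real systems would verify matches before acting.
```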
Google says it will continue refining its policies as patterns of abuse evolve. Company representatives have indicated that artificial intelligence is being deployed internally to identify high-risk content categories and prioritise review. Transparency reports detailing the volume and nature of removal requests are expected to provide further insight into how the expanded safeguards operate in practice.