Google has begun scanning all photos stored in users’ Google Photos libraries as part of a broader rollout of its Gemini AI features, marking a significant expansion in how the tech giant processes personal data to power its personalised services. The move, confirmed through updates to Google’s support documentation and observed changes in user account settings, enables the company’s AI models to analyse image content for improved search, organisation, and contextual suggestions within the Gemini ecosystem. Although Google maintains that the scanning is designed to enhance user experience — such as helping users uncover specific photos through natural language queries — the development has reignited discussions around digital privacy, consent, and the boundaries of AI-driven personalisation in consumer technology.
The initiative is tightly integrated with Gemini, Google’s next-generation AI assistant launched in early 2024 as a successor to Bard and positioned to compete with offerings like OpenAI’s ChatGPT and Microsoft’s Copilot. Gemini’s “Personal Context” feature, which allows the AI to draw insights from users’ Gmail, Drive, and now Photos, aims to deliver more tailored responses by understanding individual habits, preferences, and routines. For example, if a user frequently takes pictures of their garden, Gemini might later suggest planting tips or remind them to water plants based on seasonal patterns detected in image metadata and visual content. Google states that this processing occurs primarily on-device for Android users or within its secure cloud infrastructure, with data not used for advertising purposes.
However, the automatic scanning of personal photos without explicit, granular opt-in consent has raised concerns among privacy advocates and data protection experts. Unlike earlier features that required users to manually enable face grouping or object recognition, the current update appears to activate image analysis by default for accounts using Gemini’s personalisation tools. Google’s support pages indicate that users can disable “Personal Context” in their Gemini settings, which halts the use of Photos data for AI training and contextual features, though it remains unclear whether previously scanned images are retained or deleted upon deactivation. The company has not published a detailed technical whitepaper outlining the retention, encryption, or audit protocols for this specific data stream.
To verify the scope and mechanics of this update, World Today Journal consulted Google’s official AI Principles page, the Gemini Privacy Hub, and recent updates to the Google Account activity controls. These sources confirm that while Google does not sell personal photos or use them to build ad profiles, the company does process visual content to improve its AI models’ understanding of user context, a practice covered under its broader Terms of Service. Notably, the scanning applies to all photos backed up to Google Photos, whether they were captured on Android or iOS devices or uploaded via the web, meaning iPhone users who rely on Google Photos for cloud storage are also affected.
Experts from the Electronic Frontier Foundation (EFF) and Access Now have urged Google to adopt a more transparent, opt-in approach for such sensitive data processing, particularly given the potential for misuse or unintended inferences from image content — such as detecting health conditions, religious practices, or political affiliations through visual cues. In response to inquiries, Google reiterated that its AI systems are designed to avoid making sensitive inferences and that users retain control over their data through privacy settings. The company also pointed to its use of federated learning and differential privacy techniques in other products as evidence of its commitment to minimising privacy risks, though it has not confirmed whether these methods are currently applied to photo scanning for Gemini.
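For readers unfamiliar with differential privacy, the core idea is to add calibrated random noise to aggregate statistics so that no single user’s data can be reliably inferred from a released result. The sketch below is a generic, minimal illustration of the classic Laplace mechanism; Google has not confirmed whether or how such techniques are applied to photo scanning for Gemini, and the function names (`laplace_noise`, `dp_count`) are hypothetical, not part of any Google API.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(items, epsilon: float, sensitivity: float = 1.0) -> float:
    # Release a count with Laplace noise scaled to sensitivity/epsilon,
    # which satisfies epsilon-differential privacy for counting queries:
    # adding or removing one user's record changes the count by at most
    # `sensitivity`, and the noise masks that change.
    return len(items) + laplace_noise(sensitivity / epsilon)
```

A smaller `epsilon` means more noise and stronger privacy; a larger `epsilon` means a more accurate but less private result. In practice, deployed systems combine mechanisms like this with a tracked privacy budget across queries.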
As regulatory scrutiny intensifies globally — particularly under the EU’s AI Act and ongoing investigations by Ireland’s Data Protection Commission into Google’s AI training practices — the company may face pressure to clarify how it balances innovation with user autonomy. For now, users concerned about privacy are advised to review their Gemini activity controls, disable Personal Context if desired, and consider alternative photo storage solutions with stronger end-to-end encryption guarantees. Google has not announced a timeline for further updates to this feature, but any changes to data processing policies would likely be communicated through its official blog or privacy policy revisions.
Stay informed about developments in AI ethics and data privacy by following trusted technology regulators and independent auditors, and share your thoughts on how companies should balance personalisation with privacy in the comments below.