Three Indian American researchers at Purdue University have developed a groundbreaking system to safeguard personal identities during AI photo editing by limiting what AI models can detect about key personal attributes.
Three Indian American researchers at Purdue University have created a patent-pending system designed to protect against identity leakage during AI photo editing. This innovative tool reduces the ability of artificial intelligence to detect sensitive attributes such as eye color and facial hair.
The system, developed by Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty, is applied before and after a photo is uploaded to an AI editing platform. According to a media release from the West Lafayette, Indiana-based public research university, the technology aims to help consumers, businesses, and institutions edit and share profile photos, ID images, and personal pictures without compromising their private identities.
“Results of validation testing show that we can preserve editing quality while dramatically reducing what AI models can learn about your identity,” Aggarwal stated. “This is a critical step toward trustworthy generative AI.” Their research has been published in the peer-reviewed journal IEEE Transactions on Artificial Intelligence.
Aggarwal holds the title of University Faculty Scholar and serves as the Reilly Professor of Industrial Engineering, with additional appointments in the Department of Computer Science and the Elmore Family School of Electrical and Computer Engineering. Both Tamboli, a doctoral alumnus, and Punyamoorty, a doctoral candidate in electrical and computer engineering, have worked in Aggarwal’s research group.
“Our system allows users to mask sensitive regions on their photo, like the face, from an AI editing service,” Tamboli explained. “Those regions are masked locally on the user’s device using a detailed outline of the region.” He added that only the masked image is sent to the AI editing service. “After the image is edited by AI, our system reintegrates the sensitive region back into the edited image using geometric alignment and blending,” he noted.
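The mask-then-reintegrate workflow Tamboli describes can be sketched in a few lines of NumPy. Everything here is illustrative: the function names, the zero-fill masking, and the direct pixel paste-back are assumptions for demonstration only, not the researchers' implementation, which reportedly uses geometric alignment and blending for seamless results.

```python
# Illustrative sketch of the mask -> edit -> reintegrate workflow.
# All names and the simple paste-back are assumptions, not the Purdue system.
import numpy as np

def mask_region(image, region_mask, fill_value=0):
    """Blank out the sensitive region locally, before the photo is uploaded."""
    masked = image.copy()
    masked[region_mask] = fill_value
    return masked

def reintegrate(edited, original, region_mask):
    """Paste the original sensitive pixels back into the AI-edited image."""
    result = edited.copy()
    result[region_mask] = original[region_mask]
    return result

# Demo: an 8x8 RGB image with a 3x3 "sensitive" region (e.g., a face crop).
img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True

masked = mask_region(img, mask)    # only this masked image leaves the device
edited = masked.copy()             # stand-in for the AI service's edited output
restored = reintegrate(edited, img, mask)
```

The key property of the design is visible even in this toy version: the remote service only ever receives the masked image, while the sensitive pixels stay on the user's device and are restored locally after editing.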
Aggarwal emphasized that the Purdue system is the first solution to provide full privacy, as sensitive data never leaves the user’s device. This approach not only produces seamless, natural results in the final edited image but is also compatible with any commercial generative AI model, eliminating the need for retraining.
“It’s privacy by design,” Aggarwal said. “With our system, the AI platform never sees the face, but the final edited image still looks completely natural.” The researchers have disclosed their system to the Purdue Innovates Office of Technology Commercialization, which has applied for a patent to protect the intellectual property.
Addressing the privacy risks associated with AI editing tools, Tamboli noted that modern generative AI technologies edit photos with impressive realism but require users to upload full, unaltered images to cloud-based systems. These images often contain private details, including facial features and identifying characteristics.
“Requiring full, unaltered images creates serious privacy and security risks,” he said. “Once a photo is uploaded, users lose control over where their biometric data goes, how it is stored, or how it might be misused.” Tamboli criticized previous privacy approaches that relied on blurring sensitive regions, locking parts of an image, using stylization filters, or avoiding cloud uploads entirely, stating that these methods fail to fully protect personal identity.
The research team validated their system by testing how well leading AI foundation models could infer biometric attributes from masked versus unmasked images. They discovered that the Purdue system significantly reduced the ability of AI models to detect attributes such as eye color, facial hair, and age group. In some instances, the accuracy of attribute classification dropped by more than 80%, demonstrating robust protection against identity leakage.
The researchers are actively working to bring this technology closer to real-world deployment, with plans to expand the system’s capabilities to protect additional sensitive features, including medical details, ID documents, and other privacy-critical content.
This innovative development highlights the ongoing efforts of researchers to address privacy concerns in the rapidly evolving landscape of AI technology, ensuring that personal identities remain secure in the digital age.
According to The American Bazaar, the Purdue Innovates Office of Technology Commercialization is committed to advancing this technology for broader application.

