Adobe is wading into the intersection of artificial intelligence and diversity, equity, and inclusion (DEI) initiatives. But is the approach foolproof?
Adobe’s pending patent application describes a “diversity auditing” system built on computer vision. The system uses facial recognition and image classification to categorize employee photos by perceived physical attributes.
The system scans batches of images to detect faces, then classifies each face according to a predicted “sensitive attribute” associated with “protected individual classes,” which might refer to factors such as race, age, or gender. An illustration in the filing suggests the system could analyze images from a corporate website and compare its inferences against a “comparison population,” drawn from sources such as official census data, employment statistics, or the company’s own internal diversity records.
A machine learning model then compares the categorized images against the reference population to produce a “diversity score.” The system can go a step further, padding the image collection with “supplementary procured images” until it meets a predefined diversity benchmark.
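The filing doesn’t spell out the scoring math, but one plausible reading is a straightforward distributional comparison. The Python sketch below is illustrative only: it assumes a face-attribute classifier has already produced one label per detected face, that the comparison population is supplied as label shares, and that the score is one minus the total variation distance between the two distributions. None of these choices is confirmed by the patent.

```python
from collections import Counter

def diversity_score(predicted_labels, reference_distribution):
    """Hypothetical scoring step: compare the attribute distribution
    inferred from detected faces against a reference population.

    predicted_labels: one label per detected face, e.g. the output of
        a face-attribute classifier run over a batch of photos.
    reference_distribution: mapping of label -> share of the comparison
        population (census data, employment figures, etc.); sums to 1.
    """
    total = len(predicted_labels)
    counts = Counter(predicted_labels)
    observed = {label: counts.get(label, 0) / total
                for label in reference_distribution}

    # Total variation distance between observed and reference shares:
    # 0.0 is a perfect match, 1.0 is no overlap at all.
    tv_distance = 0.5 * sum(
        abs(observed[label] - reference_distribution[label])
        for label in reference_distribution
    )

    # Higher score = closer to the reference population.
    return 1.0 - tv_distance

# Labels inferred from a batch of corporate-site photos, compared
# against hypothetical reference shares:
labels = ["group_a", "group_a", "group_b", "group_a", "group_c"]
reference = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}
print(f"diversity score: {diversity_score(labels, reference):.2f}")  # 0.90
```

On this reading, the “supplementary procured images” step would simply mean adding labeled images to the collection until the score clears the predefined benchmark.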
Traditional diversity audits demand exhaustive manual effort and don’t scale to large image batches; Adobe proposes a more streamlined, automated alternative.
However, Mukul Dhankhar, founder of the computer vision company Mashgin, has reservations. Extracting diversity metrics from images is an intricate problem: he questions whether age, gender, or racial identity can reliably be determined from visual cues alone. Moreover, the proposal does not explicitly address how it would handle multiracial or gender-nonconforming identities.
Dhankhar elaborates: “The patent mentions a singular diversity score. Yet, diversity is multifaceted.” How, he asks, would the model score a male-dominated image batch featuring varied ethnicities, or respond to stereotypical depictions?
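To make that concrete, here is a hypothetical per-attribute variant, reusing the diversity_score helper sketched earlier. Scoring each sensitive attribute separately surfaces exactly the case Dhankhar raises: a batch can look diverse on ethnicity while a single blended number hides its gender skew.

```python
def per_attribute_scores(faces, references):
    """Hypothetical alternative to a single aggregate score: rate each
    sensitive attribute separately so one imbalance can't hide behind
    another.

    faces: list of dicts of inferred labels per detected face,
        e.g. {"gender": "male", "ethnicity": "group_a"}.
    references: mapping of attribute -> reference distribution.
    """
    return {attr: round(diversity_score([face[attr] for face in faces], ref), 2)
            for attr, ref in references.items()}

# A male-dominated batch that is nonetheless ethnically varied:
faces = [
    {"gender": "male",   "ethnicity": "group_a"},
    {"gender": "male",   "ethnicity": "group_b"},
    {"gender": "male",   "ethnicity": "group_c"},
    {"gender": "female", "ethnicity": "group_a"},
]
references = {
    "gender":    {"male": 0.5, "female": 0.5},
    "ethnicity": {"group_a": 0.4, "group_b": 0.3, "group_c": 0.3},
}

print(per_attribute_scores(faces, references))
# {'gender': 0.75, 'ethnicity': 0.9} -- the ethnicity score looks
# healthy while the gender score flags the imbalance.
```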
Furthermore, Adobe’s patent should delve deeper into how the AI system itself would be kept free of bias, including bias inadvertently introduced by its developers. “The proposal remains silent on measures to ensure the model’s impartiality,” observes Dhankhar.
Nevertheless, Dhankhar acknowledges the technology’s promise. Implemented judiciously, it could help audit the inclusivity of the datasets used to train AI models. And if it were extended beyond visual data to incorporate text, its applications could be broader still.
Dhankhar concludes, “Recognizing data biases is commendable, but this patent requires a deeper dive into specifics.”
Adobe’s move to bring AI into DEI signals a broader shift toward leveraging technology for inclusionary practices. Its proposed system promises efficiency and scalability, but the intricacies of diversity demand careful handling: the AI’s interpretation of diversity must be as multifaceted and inclusive as the human reality it aims to represent. Companies like Adobe stand at the crossroads of innovation and ethics, where the thoughtful application of technology in sensitive areas matters most.