Camera pipelines have historically produced worse results for darker skin tones. Google states it "diversified the images we use to train our models…regardless of skin tone and…lighting conditions" (technical overview) and explicitly calls out historic exclusion and the goal of "more equitable" imaging (image equity post). The Real Tone hub describes partnering with photographers to portray "more than 60 individuals" (community build). Lesson: fairness requires representative data, disaggregated evaluation, and community-in-the-loop testing.
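The "disaggregated evaluation" part of that lesson is concrete enough to sketch: instead of reporting one aggregate quality score, report the metric per skin-tone group and track the gap between the best- and worst-served groups. Below is a minimal, hypothetical sketch, not Google's actual pipeline; the `QualityResult` type, `evaluate_by_group`, `fairness_gap`, the gap budget, and the Monk Skin Tone (MST) style bucket labels are all assumptions for illustration.

```python
"""Minimal sketch of disaggregated (per-group) evaluation.

Assumes each test image carries a quality score (higher is better) and an
annotated skin-tone group, e.g. a bucketed MST-style rating. All names and
thresholds here are hypothetical.
"""

from dataclasses import dataclass
from statistics import mean

# Hypothetical fairness budget: largest tolerated gap between any two groups.
MAX_ALLOWED_GAP = 0.05


@dataclass
class QualityResult:
    skin_tone_group: str  # e.g. "MST 1-3", "MST 4-7", "MST 8-10"
    score: float          # per-image quality metric in [0, 1]


def evaluate_by_group(results: list[QualityResult]) -> dict[str, float]:
    """Aggregate the metric per subgroup instead of one overall average."""
    by_group: dict[str, list[float]] = {}
    for r in results:
        by_group.setdefault(r.skin_tone_group, []).append(r.score)
    return {group: mean(scores) for group, scores in by_group.items()}


def fairness_gap(per_group: dict[str, float]) -> float:
    """The headline number: best-served group minus worst-served group."""
    return max(per_group.values()) - min(per_group.values())


if __name__ == "__main__":
    # Toy data standing in for a real, representative evaluation set.
    results = [
        QualityResult("MST 1-3", 0.92), QualityResult("MST 1-3", 0.90),
        QualityResult("MST 4-7", 0.88), QualityResult("MST 4-7", 0.86),
        QualityResult("MST 8-10", 0.79), QualityResult("MST 8-10", 0.81),
    ]
    per_group = evaluate_by_group(results)
    gap = fairness_gap(per_group)
    print(per_group)
    print(f"gap={gap:.3f} (budget {MAX_ALLOWED_GAP}) ->",
          "FAIL" if gap > MAX_ALLOWED_GAP else "OK")
```

The design point is that the gap itself becomes a release gate: an overall average can improve while the worst-served group regresses, and only a per-group report makes that visible.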
When imaging systems consistently misrepresent some faces, the harm is both emotional and practical: people are excluded from “looking like themselves” in personal and professional contexts. Biased camera output also compounds in downstream systems that rely on images (photo enhancement, sharing, archiving). The benefit of fixing it is universal: multi-person photos work better, and users gain trust that the product sees them. Build inclusion into model training, not just marketing, and publish what you changed.
Join "Resonate", my weekly series that puts the best examples, tips, and insights for designing products that resonate with everyone, everywhere.