As someone of mixed racial descent, I found the AI results pretty abysmal. My generated images ticked practically every Asian businessman stereotype.
It occurred to me that photo uploads only capture the face as a flat image, whereas some websites (e.g. Mister Spex's "virtual try-on" for glasses) already capture the face dynamically via the camera. That would enable 3D mapping and give the AI much richer detail to work with.
The two inputs combined should be far more effective than photos alone, and would also reduce the reliance on library images (which turned me into a bad stock photo). So how about building this in as an additional feature alongside the photo uploader?