After reading a Reuters article about Amazon shutting down its hiring algorithm because it was biased against women, I began re-reading articles I had kept for future research on the effects of data and AI on political, sexual, and racial minorities. I’m sharing them here. I hope they are enlightening.
The algorithms behind AI are only as unbiased as the people writing them. The whole facial recognition software thing has weirded me out for a while. I don’t see how it offers a significant benefit without also being used for nefarious purposes by governments and law enforcement.
Absolutely. There’s a tendency to assume that deployed technology is neutral, but that overlooks the positionality of the people deploying it.