Week 9: Reading Response
Excavating AI, by Kate Crawford and Trevor Paglen
This text examines the ethical issues surrounding AI datasets and the biases they contain. I find this whole topic incredibly important to understand. I'd read the occasional snippet over the years about AI being biased, but I'd never put much time into thinking about it. The articles we've read in this class have really drawn my attention to the problems with the datasets AI trains on and then uses to generate all its output. It seems obvious now that I've read and thought more about it: of course AI is going to be just as biased as humans if it is trained by digesting all the crap that humans generate. Humans are trained in the same way. A person watching Fox News is being trained on the Fox News dataset, and they will come away full of Fox News lies and biases. Most people (myself included, until I read and thought about this more) assume technology is a neutral tool built on unbiased math and statistics, but anything humans make will carry their biases and will always have some underlying point of view and agenda.
Is it even possible to fix this problem? If humans make something, won't there always be some sort of bias in it? I hope the smart people working on this find solutions. I've heard many calls for AI companies to put more emphasis on ethics and on removing bias from their products. I'm sure it's possible to fix these systems to some extent, but I wonder how much is unfixable, considering the datasets are ultimately generated by humans, with all their problems.