I’m in a master’s program in AI ethics, and I can’t get past the problem of human bias in the algorithms we code; even saying you don’t have any biases is itself a bias. I can’t imagine how to solve something like that except to systematically identify the biases of the people working in machine learning and address them, in a way that would help certain industries and would not, for example, keep prisoners from being released early from their sentences simply because they are Black (I sketch a toy example of what I mean at the end of this post). Granted, most of us are not aware of our own biases, so this entire response could itself be biased.

What I worry about is keeping AI use in check, and I believe there is a committee or company of some sort in San Fran that formed for that purpose. I’m not worried about AI being used as such. Also, forgive my thoughts on this, and admittedly I have to do more research, but isn’t AI the most wasteful of all tech industries when it comes to energy use? What’s scary is the bare-bones feeling that it may be nothing more than a tool using stereotypes to make life-altering decisions for people.
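To make the "identify the biases" part concrete, here is a rough sketch of one way people audit a model for this kind of disparity. Everything here is made up by me for illustration (the Case class, the group labels, and the numbers are hypothetical, not drawn from any real sentencing system); the idea is just to compare the false positive rate, the share of people who never reoffended but were still flagged high risk, between two groups.

```python
# Hypothetical bias audit sketch: compare false positive rates of a
# "high risk" classifier across two groups. All data below is invented
# for illustration; nothing comes from a real sentencing system.
from dataclasses import dataclass

@dataclass
class Case:
    group: str                  # demographic group label (hypothetical)
    predicted_high_risk: bool   # model flagged the person as high risk
    reoffended: bool            # observed outcome after release

def false_positive_rate(cases, group):
    """Share of people in `group` who did NOT reoffend but were still flagged high risk."""
    relevant = [c for c in cases if c.group == group and not c.reoffended]
    if not relevant:
        return 0.0
    return sum(c.predicted_high_risk for c in relevant) / len(relevant)

# Toy data, purely illustrative.
cases = [
    Case("A", True, False), Case("A", False, False), Case("A", True, True),
    Case("B", False, False), Case("B", False, False), Case("B", True, True),
]

gap = false_positive_rate(cases, "A") - false_positive_rate(cases, "B")
print(f"False positive rate gap (A minus B): {gap:.2f}")
```

A large gap here would suggest the model flags one group as high risk more often even among people who never reoffended, which is exactly the kind of stereotype-driven, life-altering decision I'm worried about.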